2.1: Algorithmic Bias


One of the most pressing ethical concerns surrounding AI is algorithmic bias. Algorithmic bias occurs when the data used to train an AI system reflects the biases and prejudices of the society that produced it, leading the system to generate discriminatory outputs.

ChatGPT is a prime example of an AI system that can exhibit algorithmic bias. It is a large language model trained on a massive dataset that includes the Common Crawl, a web archive containing more than twelve years' worth of web pages. While such datasets give the models tremendous capabilities, they are inherently biased: scraping the internet indiscriminately means the data can contain racist, sexist, ableist, and otherwise discriminatory language. As a result, ChatGPT can produce outputs that perpetuate these biases and prejudices.
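The mechanism described above can be illustrated with a deliberately simplified sketch. The corpus below is a hypothetical toy dataset (not real training data), skewed so that "doctor" co-occurs mostly with "he" and "nurse" with "she"; a model that simply learns the most frequent association faithfully reproduces that skew.

```python
from collections import Counter

# Hypothetical toy corpus. The skew (doctors paired mostly with "he")
# is deliberate, to mimic a biased scrape of web text.
corpus = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "she is a doctor", "he is a nurse",
]

def pronoun_counts(corpus, occupation):
    """Count which pronoun co-occurs with an occupation in the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if occupation in words:
            counts[words[0]] += 1
    return counts

def predict_pronoun(corpus, occupation):
    """Return the most frequent pronoun -- the 'learned' association."""
    return pronoun_counts(corpus, occupation).most_common(1)[0][0]

print(predict_pronoun(corpus, "doctor"))  # prints "he"  -- reflects the skew
print(predict_pronoun(corpus, "nurse"))   # prints "she" -- reflects the skew
```

Nothing in the counting logic is prejudiced; the discriminatory association comes entirely from the data, which is the core of the problem with indiscriminately scraped training sets.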

Moreover, AI models can reflect the biases and prejudices of society as a whole. Like the offline world, the online community underrepresents marginalised groups and overrepresents others. For instance, the racism and bigotry prevalent on platforms like Reddit and Twitter can bleed into training datasets and be reproduced in the output of AI models.

Algorithmic bias can also arise from the training and reinforcement methods used to develop AI systems. For example, predictive policing systems used by law enforcement agencies in the US disproportionately target poor, Black, and Latinx communities, reinforcing existing systemic biases.
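One way such reinforcement can compound bias is through a feedback loop: patrols are sent where past incidents were recorded, and patrolling generates new records. The sketch below is a hypothetical simulation (the district names, starting counts, and crime rate are all invented), in which two districts with an identical underlying crime rate diverge simply because one starts with slightly more recorded incidents.

```python
# Hypothetical feedback-loop sketch: districts A and B have the SAME
# true incident rate, but A starts with two more historical records.
records = {"A": 12, "B": 10}   # invented historical record counts
true_rate = 0.5                # identical underlying rate in both districts

for day in range(50):
    # Allocate the single patrol to the district with more records...
    target = max(records, key=records.get)
    # ...and patrolling surfaces incidents at the true rate, adding records.
    records[target] += true_rate

print(records)  # prints {'A': 37.0, 'B': 10} -- the initial gap has widened
```

Because the allocation rule only ever looks at recorded incidents, the small initial disparity is amplified day after day, even though neither district is actually more crime-prone; this is the kind of self-reinforcing dynamic the paragraph above describes.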


    2.1: Algorithmic Bias is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by LibreTexts.
