
1.3: Advanced


    Affect recognition

    Affect recognition means interpreting a person’s emotions through their facial expressions, body language, speech patterns and actions. It’s a controversial practice that has been widely criticised for poor research methodologies and inconsistent results.

    Despite these controversies, affect recognition is an industry worth billions of dollars. It is also an industry that has already made its way into education. A system named 4 Little Trees, developed in Hong Kong, claimed to be able to monitor children’s facial expressions and to assign labels for emotions such as ‘happy’, ‘sad’, and ‘angry’. The system also claimed to be able to identify motivation and to predict grades.

    Affect recognition is problematic on a number of levels. As well as the aforementioned question mark over its accuracy, many people question whether emotions should be “datafied” at all. There are privacy concerns with affect recognition being built into surveillance technology, including in schools. And, similar to the issues with bias discussed earlier, affect recognition technology can perpetuate discrimination. In one example, an algorithm trained to identify possible “terrorist behaviour” resulted in racial profiling.


    Human labour

The ethical concern of AI and human labour is a two-sided coin. On one side are the perennial fears that machine automation will replace jobs, even in white-collar industries like law and finance. On the other is the fact that current AI systems rely on a tremendous amount of dangerous, low-paid human labour.

The “robots taking our jobs” argument goes back a long way. In the 16th century, Queen Elizabeth I rejected an application for a patent on a stocking-making machine, fearing it would put too many stocking-makers out of work. In more recent years, old fears of AI replacing human “knowledge work” have been reignited by ever more powerful models like GPT-3. And although most commentators are quick to claim that AI will never replace teachers, some have predicted that parts of the job could be automated as early as 2027.

    Hidden beneath the rhetoric of the jobs AI will destroy, however, is an unseen narrative of the jobs it currently requires to function. It is useful for the companies behind AI technology that the public views it as something mysterious and almost magical. Current advances like ChatGPT and Midjourney seem to be able to produce countless outputs in text and image with little input. But there is human labour powering the magic.

A recent article by Time magazine explored the harsh conditions of the Kenyan workers employed by OpenAI to label inappropriate data for its language model. Working for less than $2 an hour, these labourers were partly responsible for training an AI algorithm to identify hate speech, graphic descriptions of sex and violence, and other “toxic” text. Workers were required to read and label huge amounts of this data, with some reporting the experience as deeply traumatic.


[Image via Midjourney. Prompt: “workers on a conveyor belt heading towards a drop into an abyss. workers carrying dollar signs. dollar sign carrying factory line workers on a conveyor. Shadows and darkness in shades of black and yellow. Dramatic feature editorial header image collage illustration. --ar 3:2 --q 2 --v 4”]

    The human costs of AI labour are more than just job cuts.

    Power and hegemony

This final ethical concern brings us full circle back to “bias”, but with a more nuanced perspective. Because the data AI models are built on is “frozen in time”, it represents a static world view that encodes existing power structures and hierarchies in society. The reinforcement of this hegemony can further oppress and marginalise already disadvantaged people.

Think of AI as a self-perpetuating cycle. The datasets encode a certain power structure into the model – often the dominance of a heterosexual, white, Western, male perspective, due to the volume of content on the internet written from that lens. This is then reflected in the output, which may in turn be used to train future models by generating “synthetic data”. Although efforts are underway to make “fair” synthetic data, it has still been found to reproduce biases.

    AI also reinforces global hegemonies both in political and corporate terms. Countries and organisations need access to wealth, energy, and resources to successfully train and scale up AI models. This means that powerful AI is increasingly concentrated in the hands of those who already have the most. Actions like those outlined above in “human labour” further entrench the divide between the wealthy countries who produce AI and the poorer countries who bear the brunt of the human and environmental costs.


    1.3: Advanced is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by LibreTexts.
