
1.5: Algorithms


    Invisible, Irreversible, and Infinite

    Diana Daly

    Key points

    • Computers execute tasks through simple step-by-step instructions, breaking down complex actions.
    • Human adaptability contrasts with computers’ literal interpretation, evident in the need for explicit instructions.
    • Human software developers significantly shape the capabilities of modern computers.
    • Programming languages reflect biases, affecting the diversity of computer programming practitioners.
    • The Three I’s – Invisible, Irreversible, and Infinite – pose challenges in algorithmic decision-making, leading to opaque, permanent, and extensive biases.

    In this chapter

    Student insights: First experience with technology (video by Blaze Mutware, Spring 2021)

    An interactive H5P element has been excluded from this version of the text. You can view it online here: https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63

    "}" data-sheets-userformat="{"2":513,"3":{"1":0},"12":0}">Graphic profile image provided by the student author depicting a dark-skinned person with a red sleeveless shirt, looking into the distance with dark hair, in front of a lime green background with yellow polka dots.

    An interactive H5P element has been excluded from this version of the text. You can view it online here:
    https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#h5p-93

    An interactive H5P element has been excluded from this version of the text. You can view it online here:
    https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#h5p-324

    Section 4: How can computers carry bias?

    Many people think computers and algorithms are neutral – racism and sexism are not programmers’ problems. In the case of Tay’s programmers, this false belief enabled more hate speech online and led to the embarrassment of their employer. Human-crafted computer programs mediate nearly everything humans do today, and human responses are involved in many of those tasks. Considering the near-infinite extent to which algorithms and their activities are replicated, the presence of human biases is a devastating threat to computer-dependent societies in general and to those targeted or harmed by those biases in particular.

    A white man wearing a pair of Google Glass smart glasses. The glasses have a sleek, modern design with a small rectangular display over the right eye. The background is blurred, focusing attention on the glasses and the person's face.
Google Glass was considered by some to be an example of a poor decision by a homogeneous workforce.

Problems like these are rampant in the tech industry because there is a damaging belief in US (and some other) societies that the development of computer technologies is antisocial work, and that some kinds of people are better at it than others. As a result of this bias in tech industries and computing, there are not enough kinds of people working on tech development teams: not enough women, not enough people who are not white, not enough people who remember to think of children, not enough people who think socially.

Remember Google Glass? You may not; the product failed because few people wanted interaction with a computer to come between themselves and eye contact with other humans and the world. People who fit the definition of “tech nerd” made up much of the small demographic that did want this, but the sentiment was not shared by the broader community of technology users. Critics labeled the unfortunate people who did purchase the product “glassholes.”

    Section 5: Exacerbating Bias in Algorithms: The Three I’s

In its early years, the internet was viewed as a utopia, an ideal world that would permit a completely free flow of all available information to everyone, equally. John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace represents this utopian vision, in which the internet liberates users from all biases and even from their own bodies (at which human biases are so often directed). Barlow’s utopian vision does not match the internet of today. Our social norms and inequalities accompany us across all the media and sites we use, and they are worsened in a climate where the value of information is determined by marketability and profit, as sociologist Zeynep Tufekci explains in this TED Talk.

Because algorithms are built on human cooperation with computing programs, human selectivity and human flaws are embedded within algorithms. Humans as users carry our own biases, and today there is particular concern that algorithms pick up and spread these biases to many, many others. They can even make us more biased by hiding results that the algorithm calculates we may not like. When we get our news and information from social media, invisible algorithms consider our own biases and those of friends in our social networks to determine which new posts and stories to show us in search results and news feeds. The result for each user can be called their echo chamber or, as author Eli Pariser describes it, a filter bubble, in which we only see news and information we like and agree with, leading to political polarization.
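To make that mechanism concrete, here is a minimal, purely hypothetical sketch in Python of how an engagement-driven feed ranker could end up showing a user mostly posts they already agree with. The scoring function, weights, and field names are illustrative assumptions, not any platform’s actual algorithm.

```python
# Hypothetical sketch: ranking a feed by predicted agreement ("engagement"),
# which tends to produce a filter bubble. Not any real platform's algorithm.

def predicted_engagement(post, user, friends):
    """Score a post higher when it matches the user's and their friends' leanings."""
    own_match = 1.0 if post["stance"] == user["stance"] else 0.0
    friend_match = sum(1.0 for f in friends if post["stance"] == f["stance"]) / max(len(friends), 1)
    # Illustrative weights: the user's own bias counts most, friends' biases reinforce it.
    return 0.7 * own_match + 0.3 * friend_match

def build_feed(posts, user, friends, limit=10):
    """Return the highest-scoring posts; disagreeable posts quietly drop out of view."""
    ranked = sorted(posts, key=lambda p: predicted_engagement(p, user, friends), reverse=True)
    return ranked[:limit]

# Example: a user who leans "A", with friends who mostly lean "A",
# will rarely be shown posts with stance "B".
user = {"stance": "A"}
friends = [{"stance": "A"}, {"stance": "A"}, {"stance": "B"}]
posts = [{"id": 1, "stance": "A"}, {"id": 2, "stance": "B"}, {"id": 3, "stance": "A"}]
print(build_feed(posts, user, friends, limit=2))  # posts 1 and 3: the "B" post is hidden
```

Nothing in this sketch is malicious; the bubble emerges simply because posts the user is predicted to dislike score lower and fall off the end of the feed.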

Although algorithms can generate very sophisticated recommendations, algorithms do not make sophisticated decisions. When humans make poor decisions, they can rely on themselves or on other humans to recognize and reverse the error; at the very least, a human decision-maker can be held responsible. Human decision-making often takes time and critical reflection to implement, such as the writing of an approved ordinance into law. When algorithms are used in place of human decision-making, I describe what ensues as The Three I’s: algorithms’ decisions become invisible, irreversible, and infinite. Most social media platforms and many organizations using algorithms will not share how their algorithms work; because of this lack of transparency, these are known as black box algorithms.

    Student Insights: microcelebrity in the age of algorithms (writing by Lily, Spring 2021)

…affordances of social media, our society has turned to using the platform for more selfish reasons, such as the fame granted when going viral.

There are some noticeable pros and cons that are intertwined with media spreadability. This term highlights how media is continually spread and then passed on to others, continuing the chain.

At this point, almost everyone has had exposure through the media. In terms of spreadability, exposure happens much more quickly. One second a video is posted and the next, it could have thousands of views. This was the case with now-famous influencer Emma Chamberlain.

Currently, Emma Chamberlain has accumulated an astounding total of nearly ten million subscribers and counting. At only 19, she has established a huge platform for herself, and when she started as a young high school girl nearly three years ago, I can guarantee she had no idea what was in store for her in terms of success. As of now, she has won three awards for her YouTube career: a People’s Choice Award, a Shorty Award, and lastly, a Teen Choice Award. Her breakout YouTube fame allowed her to then write her own book, create a podcast, and even make and sell merchandise. All of this success is due to a few viral videos that skyrocketed a young girl’s career. How did her videos spread so quickly? Was her content really that appealing to her audience? Did she face any backlash? What type of content tends to go viral? Chelsea Galvin is here to give her insight on these types of questions we all have.

Emma Chamberlain was just an ordinary girl from Belmont, California. Who else is a teenage girl from Belmont, California? My roommate Chelsea Galvin. She was a primary witness to Emma Chamberlain’s claim to fame. Both girls are from the same hometown, attended the same high school, and had the same classes.

Chelsea is no stranger to the realm of social media. She uses popular apps such as Instagram, TikTok, and Snapchat (her favorite as of now). She is familiar with the various algorithms that shape her explore and “for you” pages. She typically watches videos about house decor, food, and videos of friends just having fun.

She believes that Emma Chamberlain’s content was relevant for teen girls today. Her content is “different from mainstream media and what we usually see on youtube” and is associated with certain algorithms that relate to young teens today. Ultimately, Emma Chamberlain became so well known for her unstaged and realistic content that she is now easily recognizable by so many people.

Although having a presence in the media may seem extremely desirable, there are always obstacles and hardships that must be overcome. Cancel culture. Currently, this is a big part of having a media presence. Individuals must always be aware of what they post in order to avoid upsetting others, whether it is intentional or not.

As Chelsea describes it, “a lot of attention brings a lot of people just wanting to hate or ruin things for people,” and I totally agree. We are all human and we all make mistakes, but when those mistakes resurface online due to spreadability and a face in the media, those select individuals have a harder time than those who are not in the spotlight.

Ultimately, anyone’s content can spread and go viral; it is just a matter of time and good, relatable videos, like the ones Emma Chamberlain posts. Both Chelsea and I believe that it is important to be educated on this topic, especially in times like this when social media plays a large role in our daily lives. We have both seen the more obstructive side of social media when content goes viral, and we agree that it’s vital to be prepared for the outcome.

It was a joy to chat with Chelsea and learn more about our perspectives on the media and what content is specific to our two feeds. I learned so much from her and her story, and it really helped me conceptualize the term spreadability and how it occurs in reality.

Graphic of author depicting a young woman, faceless, with a black blouse and dark hair pulled into a messy ponytail. The background is purple with light purple polka dots.

    An interactive H5P element has been excluded from this version of the text. You can view it online here:
    https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#h5p-325

    Respond to this case study: how might a creator change their content to affect a platform’s algorithm? How can creators and users learn more about the algorithms affecting them? How might platforms benefit from sharing more information about their algorithms? Why might they want to keep some things hidden from users?

Exposing Invisible Algorithms: ProPublica

Journalists at ProPublica are educating the public on what algorithms can do by explaining and testing black box algorithms. This work is particularly valuable because most algorithmic bias is hard for small groups or individual users to detect. Studies like those ProPublica presents in its “Breaking the Black Box” series (below) have been based on groups systematically testing algorithms from different machines, locations, and users.

    One or more interactive elements has been excluded from this version of the text. You can view them online here: https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#oembed-5

Want to see more? There are four episodes in this series, available in full here. Following are links to these episodes with captions: Episode 2, Episode 3, Episode 4.
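The kind of systematic testing described above can be sketched in code. The following is a toy illustration of the general idea, assuming a scoring system we can only query, never inspect; the `black_box_score` function and the attributes probed are invented for this example and do not represent any system ProPublica actually tested.

```python
# Hypothetical sketch of "black box" auditing: probe an opaque scoring system
# with inputs that differ in only one attribute and compare the outputs.
# The scoring function below is an invented stand-in, not a real system.

import random

def black_box_score(profile):
    """Stand-in for an opaque algorithm we cannot inspect, only query."""
    base = 0.5 + 0.05 * random.random()
    # Hidden (and unfair) behavior the audit is trying to detect from outside:
    if profile["zip_code"].startswith("85"):
        base -= 0.2
    return base

def audit(attribute, value_a, value_b, trials=500):
    """Compare average scores for two groups that differ only in one attribute."""
    scores = {value_a: [], value_b: []}
    for _ in range(trials):
        profile = {"age": random.randint(18, 80), "zip_code": "85001"}
        for value in (value_a, value_b):
            probe = dict(profile)
            probe[attribute] = value
            scores[value].append(black_box_score(probe))
    return {value: sum(s) / len(s) for value, s in scores.items()}

print(audit("zip_code", "85001", "10001"))  # a persistent gap suggests the attribute matters
```

The design choice mirrors what the paragraph above describes: because no single user can see the whole picture, the audit relies on many repeated, controlled queries from different simulated users.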

    Section 6: Fighting Unjust Algorithms

Algorithms are laden with errors. Some of these errors can be traced to the biases of those who developed them, as when a facial recognition system meant for global implementation is only trained using data sets from a limited population (say, predominantly white or male). Algorithms can become problematic when they are hacked by groups of users, as Microsoft’s Tay was. Algorithms are also grounded in the values of those who shape them, and these values may reward some involved while disenfranchising others.
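One way researchers surface the kind of training-data bias described above is a disaggregated evaluation: computing error rates separately for each group rather than reporting a single overall accuracy. The sketch below is a generic illustration with made-up records, not the method of any particular study.

```python
# Hypothetical sketch of a disaggregated evaluation: measure a model's error
# rate separately for each demographic group instead of one overall number.
# The records below are made-up placeholders.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if truth != prediction:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# A system trained mostly on one group often shows a gap like this one.
sample = [
    ("group_overrepresented", "match", "match"),
    ("group_overrepresented", "no_match", "no_match"),
    ("group_underrepresented", "match", "no_match"),
    ("group_underrepresented", "no_match", "no_match"),
]
print(error_rates_by_group(sample))  # e.g. {'group_overrepresented': 0.0, 'group_underrepresented': 0.5}
```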

    Despite their flaws, algorithms are increasingly used in heavily consequential ways. They predict how likely a person is to commit a crime or default on a bank loan based on a given data set. They can target users with messages on social media that are customized to fit their interests, their voting preferences, or their fears. They can identify who is in photos online or in recordings of offline spaces.

Confronting this landscape of increasing algorithmic control is activism aimed at limiting the power of algorithms over human lives. Below, read about the work of the Algorithmic Justice League and other activists promoting bans on facial recognition. And consider: What roles might algorithms play in your life that may deserve more attention, scrutiny, and even activism?

    The Algorithmic Justice League vs facial recognition tech in Boston

MIT computer scientist and “Poet of Code” Joy Buolamwini heads the Algorithmic Justice League, an organization making remarkable headway in fighting facial recognition technologies; she explains its work in the first video below. On June 9th, 2020, Buolamwini and other computer scientists presented alongside citizens at the Boston City Council meeting in support of a proposed ordinance banning facial recognition in public spaces in the city. The meeting was held and shared by live stream during COVID-19, and its footage offers a remarkable look at the value of human advocacy in shaping the future of social technologies. The second video below should be cued to the beginning of Buolamwini’s testimony, about half an hour in. Boston’s City Council subsequently voted unanimously to ban the city’s use of facial recognition technologies.

    One or more interactive elements has been excluded from this version of the text. You can view them online here: https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#oembed-2

    One or more interactive elements has been excluded from this version of the text. You can view them online here: https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#oembed-3

    Parasocial and Parasitical — Social Media and Ourselves podcast

    One or more interactive elements has been excluded from this version of the text. You can view them online here: https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#oembed-6

    Parasocial and Parasitical

    Release date: September 1st, 2021

    An interview with Dr. Victor Braitberg about the machinations by social media platforms that help us form online relationships – and help them profit from it all. Is this good? Bad? Or The Truman Show? Produced by Jacquie Kuru and Gabe Stultz.

LISTEN | LISTEN WITH TRANSCRIPT

    Respond to this podcast episode…How did the podcast episode “Parasocial and Parasitical” use interviews, student voices, or sounds to demonstrate a current or past social trend phenomenon? If you were making a sequel to this episode, what voices or sounds would you include to help listeners understand more about this trend, and why?

    Core Concepts and Questions

    Core Concepts

    algorithm

    a step-by-step set of instructions for getting something done to serve humans, whether that something is making a decision, solving a problem, or getting from point A to point B (or point Z)

    biases

    assumptions about a person, culture, or population

    black box algorithms

the term used when the processes created for computer-based decision-making are not shared with or made clear to outsiders

    filter bubble

    a term coined by Eli Pariser, also called an echo chamber. A phenomenon in which we only see news and information we like and agree with, leading to political polarization

    The three I's

    algorithms’ decisions can become invisible, irreversible, and infinite

    why computers seem so smart today

    cooperation from human software developers, and cooperation on the part of users

    Core Questions

    A. Questions for qualitative thought:

    1. Write and/or draw an algorithm (or your best try at one) to perform an activity you wish you could automate. Doing the dishes? Taking an English test? It’s up to you.
2. Often there are spaces online that make one feel like an outsider, or like an insider. Study an online space that makes you feel like one of these – how is that outsider or insider status being communicated to you, or to others?
3. Consider the history of how you learned whatever you know about computing. This could mean how you came to understand key terms, searching online, simple programs, coding, etc. Then, reinvent that history as if you’d learned all you wish you knew about computing at the times and in the ways you feel you should have learned them.

    B. Review: Let’s test how well you’ve been programmed. (Mark the best answers.)

    An interactive H5P element has been excluded from this version of the text. You can view it online here:
    https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#h5p-26

    An interactive H5P element has been excluded from this version of the text. You can view it online here:
    https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#h5p-27

    An interactive H5P element has been excluded from this version of the text. You can view it online here:
    https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#h5p-28

    An interactive H5P element has been excluded from this version of the text. You can view it online here:
    https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#h5p-29

    C. Game on!

    An interactive H5P element has been excluded from this version of the text. You can view it online here:
    https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#h5p-309

    An interactive H5P element has been excluded from this version of the text. You can view it online here:
    https://opentextbooks.library.arizona.edu/humansrsocialmedia/?p=63#h5p-147

    Related Content

Hear It: Electronic Frontier Foundation’s “Algorithms for a Just Future”

    https://player.simplecast.com/72c98d21-5c9a-44fa-ae90-c2adcd4d6766?dark=false

    EPISODE SUMMARY

    The United States already has laws against redlining, but companies can still use other data to advertise goods and services to you—which can have big implications for the prices you see.

    EPISODE NOTES

One of the supposed promises of AI was that it would be able to take the bias out of human decisions, and maybe even lead to more equity in society. But the reality is that the errors of the past are embedded in the data of today, keeping prejudice and discrimination in. Pair that with surveillance capitalism, and what you get are algorithms that impact the way consumers are treated, from how much they pay for things, to what kinds of ads they are shown, to whether a bank will even lend them money. But it doesn’t have to be that way, because the same techniques that prey on people can lift them up. Vinhcent Le from the Greenlining Institute joins Cindy and Danny to talk about how AI can be used to make things easier for people who need a break. In this episode you’ll learn about:

      • Redlining—the pernicious system that denies historically marginalized people access to loans and financial services—and how modern civil rights laws have attempted to ban this practice.
      • How the vast amount of our data collected through modern technology, especially browsing the Web, is often used to target consumers for products, and in effect recreates the illegal practice of redlining.
• The weaknesses of the consent-based models for safeguarding consumer privacy, which often mean that people are unknowingly waiving away their privacy whenever they agree to a website’s terms of service.
      • How the United States currently has an insufficient patchwork of state laws that guard different types of data, and how a federal privacy law is needed to set a floor for basic privacy protections.
      • How we might reimagine machine learning as a tool that actively helps us root out and combat bias in consumer-facing financial services and pricing, rather than exacerbating those problems.
      • The importance of transparency in the algorithms that make decisions about our lives.
      • How we might create technology to help consumers better understand the government services available to them.

    This podcast is supported by the Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. This work is licensed under a Creative Commons Attribution 4.0 International License.

Additional music used under a Creative Commons license from ccMixter includes:

Drops of H2O (The Filtered Water Treatment) by J.Lang (c) copyright 2012. Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/djlang59/37792 Ft: Airtone

Come Inside by Zep Hurme (c) copyright 2019. Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/zep_hurme/59681 Ft: snowflake

Warm Vacuum Tube by Admiral Bob (c) copyright 2019. Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/admiralbob77/59533 Ft: starfrosch

reCreation by airtone (c) copyright 2019. Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/airtone/59721

    Read it: Social media algorithms warp how people learn from each other, research shows

    A picture of a darkened room and a blurred profile of a person looking at an illuminated laptop screen of a Twitter page with a news media photo in the center.
    Social media pushes evolutionary buttons.
    AP Photo/Manish Swarup

    William Brady, Northwestern University

    People’s daily interactions with online algorithms affect how they learn from others, with negative consequences including social misperceptions, conflict and the spread of misinformation, my colleagues and I have found.

    People are increasingly interacting with others in social media environments where algorithms control the flow of social information they see. Algorithms determine in part which messages, which people and which ideas social media users see.

    On social media platforms, algorithms are mainly designed to amplify information that sustains engagement, meaning they keep people clicking on content and coming back to the platforms. I’m a social psychologist, and my colleagues and I have found evidence suggesting that a side effect of this design is that algorithms amplify information people are strongly biased to learn from. We call this information “PRIME,” for prestigious, in-group, moral and emotional information.

    In our evolutionary past, biases to learn from PRIME information were very advantageous: Learning from prestigious individuals is efficient because these people are successful and their behavior can be copied. Paying attention to people who violate moral norms is important because sanctioning them helps the community maintain cooperation.

    But what happens when PRIME information becomes amplified by algorithms and some people exploit algorithm amplification to promote themselves? Prestige becomes a poor signal of success because people can fake prestige on social media. Newsfeeds become oversaturated with negative and moral information so that there is conflict rather than cooperation.

    The interaction of human psychology and algorithm amplification leads to dysfunction because social learning supports cooperation and problem-solving, but social media algorithms are designed to increase engagement. We call this mismatch functional misalignment.
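As a rough illustration of this misalignment (not the model used in the research described here), imagine a ranker whose engagement prediction rises with the PRIME features of a post: ranking by that prediction alone pushes PRIME-heavy content to the top regardless of quality. The feature values and weights below are invented for the example.

```python
# Hypothetical illustration: if predicted engagement rises with "PRIME" features
# (prestige, in-group, moral, emotional), engagement ranking amplifies them.
# Feature values and weights are illustrative, not taken from the study.

def predicted_engagement(post):
    prime = post["prestige"] + post["in_group"] + post["moral"] + post["emotional"]
    return post["base_quality"] + 2.0 * prime  # PRIME features dominate the score

posts = [
    {"id": "measured_report", "base_quality": 0.9, "prestige": 0.1, "in_group": 0.1, "moral": 0.1, "emotional": 0.1},
    {"id": "outrage_post",    "base_quality": 0.3, "prestige": 0.2, "in_group": 0.6, "moral": 0.9, "emotional": 0.9},
]
feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["id"] for p in feed])  # the outrage post ranks first despite lower quality
```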

    Why it matters

    One of the key outcomes of functional misalignment in algorithm-mediated social learning is that people start to form incorrect perceptions of their social world. For example, recent research suggests that when algorithms selectively amplify more extreme political views, people begin to think that their political in-group and out-group are more sharply divided than they really are. Such “false polarization” might be an important source of greater political conflict.

    https://youtube.com/watch?v=WLfr7sU5...nt%26start%3D0
    Social media algorithms amplify extreme political views.

    Functional misalignment can also lead to greater spread of misinformation. A recent study suggests that people who are spreading political misinformation leverage moral and emotional information – for example, posts that provoke moral outrage – in order to get people to share it more. When algorithms amplify moral and emotional information, misinformation gets included in the amplification.

    What other research is being done

    In general, research on this topic is in its infancy, but there are new studies emerging that examine key components of algorithm-mediated social learning. Some studies have demonstrated that social media algorithms clearly amplify PRIME information.

    Whether this amplification leads to offline polarization is hotly contested at the moment. A recent experiment found evidence that Meta’s newsfeed increases polarization, but another experiment that involved a collaboration with Meta found no evidence of polarization increasing due to exposure to their algorithmic Facebook newsfeed.

    More research is needed to fully understand the outcomes that emerge when humans and algorithms interact in feedback loops of social learning. Social media companies have most of the needed data, and I believe that they should give academic researchers access to it while also balancing ethical concerns such as privacy.

    What’s next

    A key question is what can be done to make algorithms foster accurate human social learning rather than exploit social learning biases. My research team is working on new algorithm designs that increase engagement while also penalizing PRIME information. We argue that this might maintain user activity that social media platforms seek, but also make people’s social perceptions more accurate.
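One way such a design could look, sketched here as an assumption rather than the research team’s actual approach, is a re-ranker that keeps predicted engagement in the objective but subtracts a penalty proportional to a post’s PRIME score; the penalty weight is an arbitrary choice for illustration.

```python
# Hypothetical re-ranker: keep engagement in the objective but subtract a
# penalty for PRIME content. The weight lambda_prime is an arbitrary choice,
# not the design described by the research team.

def rerank(posts, lambda_prime=2.0):
    def adjusted_score(post):
        return post["predicted_engagement"] - lambda_prime * post["prime_score"]
    return sorted(posts, key=adjusted_score, reverse=True)

posts = [
    {"id": "outrage_post", "predicted_engagement": 5.5, "prime_score": 2.6},
    {"id": "measured_report", "predicted_engagement": 1.7, "prime_score": 0.4},
]
print([p["id"] for p in rerank(posts)])  # the measured report now outranks the outrage post
```

In this sketch the trade-off the paragraph describes lives entirely in the penalty weight: set it to zero and the ranker is pure engagement; raise it and PRIME-heavy posts lose their advantage.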

    The Research Brief is a short take on interesting academic work.

    William Brady, Assistant Professor of Management and Organizations, Northwestern University

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    About the author

Dr. Diana Daly of the University of Arizona is the Director of iVoices, a media lab helping students produce media from their narratives on technologies. Prof. Daly teaches about qualitative research, social media, and information quality at the University of Arizona.



    This page titled 1.5: Algorithms is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Diana Daly, Jacquie Kuru, Nathan Schneider, Alexandria Fripp, and iVoices Media Lab via source content that was edited to the style and standards of the LibreTexts platform.