1.4: Algorithms
Invisible, Irreversible, and Infinite
Nearly any software platform you use performs its work based on algorithms, which enable it to make rapid decisions and respond predictably to stimuli. An algorithm is a step-by-step set of instructions for getting something done, whether that something is making a decision, solving a problem, or getting from point A to point B (or point Z). In this chapter, we will look into how computing algorithms work, who tends to create them, and how that affects their outcomes. We will also consider whether certain algorithms should be used at all.
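To make the idea concrete, here is a minimal sketch (not drawn from any real platform) of an everyday decision written out as a step-by-step algorithm. The scenario and thresholds are invented purely for illustration.

```python
# A minimal sketch of an everyday "algorithm": a step-by-step set of
# instructions for making a simple decision. The numbers are invented.

def should_bring_umbrella(chance_of_rain, walking_far):
    """Follow the steps and return True if the instructions say to bring an umbrella."""
    # Step 1: if rain is likely, bring it.
    if chance_of_rain >= 0.5:
        return True
    # Step 2: if rain is somewhat likely and the walk is long, bring it.
    if chance_of_rain >= 0.2 and walking_far:
        return True
    # Step 3: otherwise, leave it at home.
    return False

print(should_bring_umbrella(chance_of_rain=0.3, walking_far=True))   # True
print(should_bring_umbrella(chance_of_rain=0.1, walking_far=False))  # False
```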
Case Study: (Anti-)Social Media Algorithms
Student Content, Fall 2020
Social Media During School
Growing up in the age of technology, there was something really compelling to me about not joining a lot of the mainstream social media platforms. The only platforms I used were the ones with substantial barriers, or barrier SMPs, as I like to call them, like YouTube, which is basically the type of media where it is hard for small start-up users to maintain a healthy and large channel. On IG or TikTok, the same is much easier, which is shown by the fact that a whole lot of your friends will maintain their social media presence on IG rather than YouTube. Honestly, I can’t tell you why, I just felt that I couldn’t be trusted if I had something like Instagram or TikTok on my phone; I would use it whenever I had the slightest whim.
I feel like in the last year of high school I was this non-participating observer of the effects of social media use in school (especially in my group of friends). There was this very odd escapism about it. When we stopped talking to each other to just stare down into our phones, it seemed to me that the hyperbolic technological deterministic line of kids’ brains being trashed because of their phones had some weight to it.
Commonly enough, it’d always be Instagram or TikTok. Almost never anything else. The reason, I think, is that these two social media platforms have really high hookability and their media pieces are very short.
Hookability refers to the ease with which one can go from one piece of media to the next on the same platform. It is not an affordance, as it has nothing to do with how people communicate; it has everything to do with addictivity. Paired with the right algorithms, a high hookability measure can make almost anyone addicted to the platform. Hookability measures, say, how easy it is to go from one YouTube video to the next.
But the problem with IG and TikTok was that, despite the high hookability, the media pieces were of a much shorter duration. So instead of just watching one five-minute YouTube video, you have to go through nearly ten times as many TikToks or scroll through quite a few more IG posts. This reduces the addictive nature of the platform, as you get fatigued more quickly. So the fatigue combats the high hookability to reduce overall addictivity.
There is also this idea of branchability, which again depends on the algorithm: the greater the branching choices, the better the audience retention. But it’s tricky with shorter media items. More choice will lead to more fatigue, so that could be a negative effect, but if you don’t have enough choices, then how are you going to keep people watching? You can’t have them searching for the next video every time, can you?
It seems like a problem of optimization under constraints. To get the maximum number of people addicted, which as you can imagine would be the only aim for platforms (as they’d get more profits), they have to consider a plethora of variables, which is in no sense easy.
And the amazing thing here is that in school, we have an option to do something better! Many times we find ourselves really fatigued but we just keep scrolling through because of our inertia, and though we’d much rather do something else, we don’t have any other feasible thing to do, so we just stick to doing what we were doing. But in school, we look up from our phones and see our friends, so this inertia is much, much easier to break than if you were sitting all alone at home and were bored as frick.
I think it’s important to say that I haven’t done any rigorous research on this, but I feel like it really makes sense and I see these patterns in nearly all the social groups that I’m in.
About the Author
Omar is a freshman at UofA. On a sunny day, he likes to stay inside and eat mac n cheese with chicken nuggets. A not very well-known fact is that he graduated high school. But of course he did.
Respond to this case study… This writer used the research practice of observation to break down types of online spaces and practices. What are the benefits and challenges of drawing your knowledge about social media platforms from your own research? Demonstrate by studying the types of social media in your world.
Humans make computers what they are.
Most platforms have many algorithms at work at once, which can make the work they do seem so complex it’s almost magical. But all functions of digital devices can be reduced to simple steps if needed. The steps have to be simple because computers interpret instructions very literally.
Computers don’t know anything unless someone has already given them instructions that are explicit, with every step fully explained. Humans, on the other hand, can figure things out if you skip steps, and can make sense of tacit instructions. But give a computer instructions that skip steps or include tacit steps, and the computer will either stop working or get the process wrong without human intervention.
Here’s an example of the human cooperation that goes into the giving and following of instructions, demonstrated with a robot.
As an instructor, I can say to human students on the first day of class, “Let’s go around the room. Tell us who you are and where you’re from.” Easy for humans, right? But imagine I try that in a mixed human/robot classroom, and all goes swimmingly with the first two [human] students. But then the third student, a robot with a computer for a brain, says, “I don’t understand.” It seems my instructions were not clear enough. Now imagine another [human] student named Lila tells the robot helpfully, “Well first just tell us your name.” The robot still does not understand. Finally, Lila says, “What is your name?”
That works; the robot has been programmed with an algorithm instructing it to respond to “What is your name?” with the words, “My name is Feefee,” which the robot now says. Then Lila continues helping the robot by saying, “Now tell us where you’re from, Feefee.” Again the robot doesn’t get it. At this point, though, Lila has figured out what works in getting answers from this robot, so Lila says, “Where are you from?” This works; the robot has been programmed to respond to “Where are you from?” with the sentence, “I am from Neptune.”
In the above example, human intelligence was responsible for the robot’s successes and failures. The robot arrived with a few communication algorithms, programmed by its human developers. Feefee had not been taught enough to converse very naturally, however. Then Lila, a human, figured out how to get the right responses out of Feefee by modifying her human behavior to better match behavior Feefee had learned to respond to. Later, the students might all run home and say, “A robot participated in class today! It was amazing!” They might not even acknowledge the human participation that day, which the robot fully depended on.
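One way to picture how a robot like Feefee might work under the hood is as an exact-match lookup table: the robot can only answer questions its developers explicitly programmed it to recognize, which is why Lila’s rephrasing mattered. The sketch below is a hypothetical illustration, not Feefee’s actual code; the phrasings and replies come from the story above.

```python
# A minimal sketch of a Feefee-style response algorithm: an exact-match
# lookup table. The phrasings and replies here are illustrative only.

RESPONSES = {
    "what is your name?": "My name is Feefee.",
    "where are you from?": "I am from Neptune.",
}

def feefee_reply(utterance):
    # The robot only "understands" questions it was explicitly programmed for.
    key = utterance.strip().lower()
    return RESPONSES.get(key, "I don't understand.")

print(feefee_reply("Tell us who you are and where you're from."))  # I don't understand.
print(feefee_reply("What is your name?"))                          # My name is Feefee.
print(feefee_reply("Where are you from?"))                         # I am from Neptune.
```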
Two reasons computers seem so smart today
What computers can do these days is amazing, for two main reasons. The first is cooperation from human software developers. The second is cooperation on the part of users.
First, computers seem so intelligent today because human software developers help one another teach computers. Apps that seem groundbreaking may simply include a lot of instructions. This is possible because developers have coded many, many algorithms, which they share and reuse on sites like GitHub. The more a developer is able to copy the basic steps others have already written for computers to follow, the more that developer can then focus on building new code that teaches computers new tricks. The most influential people known as “creators” or “inventors” in the tech world may be better described as “tweakers” who improved and added to other people’s code for their “creations” and “inventions.”
The second reason computers seem so smart today is because users are teaching them. Algorithms are increasingly designed to “learn” from human input. New algorithms automatically plug input into new programs, then automatically run those programs. This sequence of automated learning and application is called artificial intelligence (AI). AI essentially means teaching computers to teach themselves directly from their human users.
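A very simplified, hypothetical sketch of this learning loop might look like the following: the program stores whatever users say to it and plays those phrases back later. Nothing in the code judges whether the input is kind or hateful; the bot simply becomes whatever its teachers make it.

```python
import random

# A minimal, hypothetical sketch of "learning" directly from users:
# the bot saves every message it is given and reuses those messages later.
# It will faithfully repeat whatever its users teach it, good or bad.

learned_replies = []

def chat(user_message):
    if learned_replies:
        response = random.choice(learned_replies)
    else:
        response = "Tell me something to say!"
    # Automatically plug the user's input back into the program.
    learned_replies.append(user_message)
    return response

print(chat("Hello there!"))      # "Tell me something to say!"
print(chat("Have a nice day."))  # echoes something a user said earlier
print(chat("Robots are great.")) # and so on
```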
If only humans were always good teachers.
Teaching machines the best and worst about ourselves
One or more interactive elements has been excluded from this version of the text. You can view them online here: https://opentextbooks.library.arizona.edu/hrsmwinter2022/?p=64#oembed-1
In 2016, Microsoft introduced Tay, an AI online robot they branded as a young female. Their intention was for Tay to learn to communicate from internet users who conversed with her on Twitter – and learn she did. Within a few hours, Tay’s social media posts were so infected with violence, racism, sexism, and other bigotry that Microsoft had to take her down and apologize.
Microsoft had previously launched Xiaoice, an AI whose behavior remained far less offensive than Tay’s, on Chinese sites including the microblog Weibo. However, the Chinese sites Xiaoice learned from were heavily censored. English-language Twitter was far less censored, and rife with trolls networked and ready to coordinate attacks. Developers and users who were paying attention already knew Twitter was full of hate.
Tay was an embarrassment for Microsoft in the eyes of many commentators. How could the company not have predicted this and protected her from bad human teachers? Why didn’t Tay’s human programmers teach her what not to say? The failure certainly reflected a lack of research: bots like @oliviataters have been more successful, in part because they benefited from a shared list of banned words that could easily have been added to their algorithms.
In addition to these oversights, Tay’s failure may also have been caused by a lack of diversity in Microsoft’s programmers and team leaders.
Programming and bias
Humans are at the heart of any computer program. Algorithms for computers to follow are all written in programming languages, which translate instructions from human language into the computing language of binary numerals, 0’s and 1’s. Algorithms and programs are selective and reflect personal decision-making. There are usually different ways they could have been written.
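As a small illustration of that selectivity, here are two of the many ways the same tiny algorithm could be written; the task and data are invented for this example.

```python
# Two of the many possible ways to write the same small algorithm
# (counting how many posts mention "cats") -- a reminder that source code
# reflects choices made by the person who wrote it. Example data is invented.

posts = ["I love cats", "dogs are great", "cats and more cats"]

# Version 1: an explicit loop.
count = 0
for post in posts:
    if "cats" in post:
        count += 1

# Version 2: the same result, written more compactly.
count_again = sum(1 for post in posts if "cats" in post)

print(count, count_again)  # 2 2
```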
Programs in computer programming languages like Python, C++, and Java are written as source code. Writing programs, sometimes just called “coding,” is an intermediary step between human language and the binary language that computers understand. Learning programming languages takes time and dedication. To learn to be a computer programmer, you either have to feel driven to teach yourself on your own equipment, or you have to be taught to program – and this is still not common in US schools.
Because computer programmers are self-selected this way, and because many people think of the typical tech geek as white and male (as suggested by the Google Image search to the right), people who end up learning computer programming in the US are more likely to be white than any other race, and more likely to identify as male than any other gender.
How can computers carry bias?
Many people think computers and algorithms are neutral – racism and sexism are not programmers’ problems. In the case of Tay’s programmers, this false belief enabled more hate speech online and led to the embarrassment of their employer. Human-crafted computer programs mediate nearly everything humans do today, and human responses are involved in many of those tasks. Considering the near-infinite extent to which algorithms and their activities are replicated, the presence of human biases is a devastating threat to computer-dependent societies in general and to those targeted or harmed by those biases in particular.
Problems like these are rampant in the tech industry because there is a damaging belief in US (and some other) societies that the development of computer technologies is antisocial, and that some kinds of people are better at it than others. As a result of this bias in tech industries and computing, there are not enough kinds of people working on tech development teams: not enough women, not enough people who are not white, not enough people who remember to think of children, not enough people who think socially.
Remember Google Glass? You may not; that product failed because few people wanted interaction with a computer to come between themselves and eye contact with humans and the world. People who fit the definition of “tech nerd” fell within this small demographic, but the sentiment was not shared by the broader community of technology users. Critics labeled the unfortunate people who did purchase the product as “glassholes.”
Case study: microcelebrity in the age of algorithms
Student Content
Gangnam Style. Nyan Cat. The Renegade. “Say So” Dance. The woman with super-glued hair. Baby Franklin. What do all of these infamous pop culture references or stars have in common? They all went viral online.
Viral. Meaning that millions of people saw this content and reposted or shared it with their friends, their families, and even the media. At some point in our lives, I am sure we have all pondered how life would be if we were famous. Newfound fame: countless fans and followers, brand deals, being recognized in public. It all sounds great, right? Well, maybe not.
With the new surge of upcoming apps such as TikTok, along with Instagram, Snapchat, Twitter, and many other platforms, come more opportunities for different content and creators to spread. Our society is now so deeply rooted in the media, with the ultimate hope of reaping the benefit of being seen. Social media was primarily created to provide an outlet for friends and families to keep connected. While this is still one of the many affordances of social media, our society has turned to using these platforms for more selfish reasons, such as the fame granted when going viral.
There are some noticeable pros and cons that are intertwined with media spreadability. This term highlights how media is continually spread and then passed on to others, continuing the chain.
At this point, almost everyone has had exposure through the media. In terms of spreadability, exposure happens much more quickly. One second a video is posted and the next, it could have thousands of views. This was the case with now-famous influencer, Emma Chamberlain.
Currently, Emma Chamberlain has accumulated an astounding total of nearly ten million subscribers and counting. At only 19, she has established a huge platform for herself, and when she started as a young high school girl nearly three years ago, I can guarantee she had no idea what was in store for her in terms of success. As of now, she has won three awards for her YouTube career: a People’s Choice Award, a Shorty Award, and lastly, a Teen Choice Award. Her breakout YouTube fame allowed her to then write her own book, create a podcast, and even make and sell merchandise. All of this success is due to a few viral videos that skyrocketed a young girl’s career. How did her videos spread so quickly? Was her content really that appealing to her audience? Did she face any backlash? What type of content tends to go viral? Chelsea Galvin is here to give her insight on these types of questions we all have.
Emma Chamberlain was just an ordinary girl from Belmont, California. Who else is a teenage girl from Belmont, California? My roommate C. She was a primary witness in Emma Chamberlain’s claim to fame. Both girls are from the same hometown, attended the same high school, and had the same classes.
C is no stranger to the realm of social media. She uses popular apps such as Instagram, TikTok, and Snapchat (her favorite as of now). She is familiar with the various algorithms that shape her Explore and “For You” pages. She typically watches videos about house decor, food, and videos of friends just having fun.
She believes that Emma Chamberlain’s content was relevant for teen girls today. Her content is “different from mainstream media and what we usually see on youtube” and is associated with certain algorithms that relate to young teens today. Ultimately, Emma Chamberlain became so well known for her unstaged and realistic content that she is now easily recognizable by so many people.
Although having a presence in the media may seem extremely desirable, there are always obstacles and hardships that must be overcome. Cancel culture, for instance, is currently a big part of having a media presence. Individuals must always be aware of what they post in order to avoid upsetting others, whether it is intentional or not.
As C describes it, “a lot of attention brings a lot of people just wanting to hate or ruin things for people” and I totally agree. We are all human and we all make mistakes but when those mistakes resurface online due to spreadability and a face in the media, those select individuals have a harder time than those who are not in the spotlight.
Ultimately, anyone’s content can spread and go viral; it is just a matter of time and good, relatable videos – like the ones Emma Chamberlain posts. Both C and I believe that it is important to be educated on this topic, especially in times like these, when social media plays a large role in our daily lives. We have both seen the more obstructive side of social media when content goes viral, and we agree that it’s vital to be prepared for the outcome.
It was a joy to chat with my roommate and learn more about our perspectives on the media and what content is specific to our two feeds. I learned so much from her and her story and it really helped me conceptualize the term spreadability and how it occurs in reality.
About the author
Lily was born and raised in Southern California, or more specifically, Pasadena, California. Her whole life she has been in the same area and absolutely loves the opportunities given to her while living in LA County. She has grown up with two brothers and two amazing dogs. Her favorite hobbies include exploring new cities, taking photos, trying new restaurants, going to the beach, and spending quality time with her loved ones. Currently, Lily is living in Tucson to further her education at the University of Arizona where she is studying Communication. She hopes to pursue event planning or advertising.
Respond to this case study… how might a creator change their content to affect a platform’s algorithm? How can creators and users learn more about the algorithms affecting them? How might platforms benefit from sharing more information about their algorithms? Why might they want to keep some things hidden from users?
Code: Debugging the Gender Gap
Created in 2015, the film Code: Debugging the Gender Gap encapsulates many of the biases in the history of the computing industry as well as their implications. Women have always been part of the US computing industry, and today that industry would collapse without engineers from diverse cultures. Yet there is widespread evidence that women and racial minorities have always been made to feel that they did not belong in the industry. And the numbers of engineers and others in tech development show a serious problem in Silicon Valley with racial and ethnic diversity, resulting in terrible tech decisions that spread racial and ethnic bias under the guise of tech neutrality. Google has made some headway in achieving a more diverse workforce, but not without backlash founded on bad science.
Below is the trailer for the film. The film is available through most University Libraries and outlets that rent and sell feature films, and through Finish Line Features.
Exacerbating Bias in Algorithms: The Three I’s
In its early years, the internet was viewed as a utopia, an ideal world that would permit a completely free flow of all available information to everyone, equally. John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace represents this utopian vision, in which the internet liberates users from all biases and even from their own bodies (at which human biases are so often directed). Barlow’s utopian vision does not match the internet of today. Our social norms and inequalities accompany us across all the media and sites we use, and they are worsened in a climate where information value is determined by marketability and profit, as sociologist Zeynep Tufekci explains in this TED Talk.
Because algorithms are built on human cooperation with computing programs, human selectivity and human flaws are embedded within algorithms. Humans as users carry our own biases, and today there is particular concern that algorithms pick up and spread these biases to many, many others. They even make us more biased by hiding results that the algorithm calculates we may not like. When we get our news and information from social media, invisible algorithms consider our own biases and those of friends in our social networks to determine which new posts and stories to show us in search results and news feeds. The result for each user can be called their echo chamber or as author Eli Pariser describes it, a filter bubble in which we only see news and information we like and agree with, leading to political polarization.
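A toy sketch can show how this kind of ranking produces a filter bubble. The example below is hypothetical and far simpler than any real platform’s system: it simply ranks posts higher when they match topics the user has already engaged with, so opposing viewpoints sink out of sight.

```python
# A toy, hypothetical feed-ranking sketch: posts that match topics the user
# already engaged with are scored higher, so the feed drifts toward more of
# the same -- one simplified way a "filter bubble" can form.

user_history = {"politics_left": 12, "cooking": 3, "politics_right": 0}

posts = [
    {"id": 1, "topic": "politics_left"},
    {"id": 2, "topic": "politics_right"},
    {"id": 3, "topic": "cooking"},
]

def score(post):
    # More past engagement with a topic => higher rank for similar posts.
    return user_history.get(post["topic"], 0)

feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])  # [1, 3, 2] -- opposing views sink to the bottom
```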
Although algorithms can generate very sophisticated recommendations, algorithms do not make sophisticated decisions. When humans make poor decisions, they can rely on themselves or on other humans to recognize and reverse the error; at the very least, a human decision-maker can be held responsible. Human decision-making often takes time and critical reflection to implement, such as the writing of an approved ordinance into law. When algorithms are used in place of human decision-making, I describe what ensues as The three I's: Algorithms’ decisions become invisible, irreversible, and infinite. Most social media platforms and many organizations using algorithms will not share how their algorithms work; for this lack of transparency, they are known as black box algorithms.
Exposing Invisible Algorithms: ProPublica
Journalists at ProPublica are educating the public on what algorithms can do by explaining and testing black box algorithms. This work is particularly valuable because most algorithmic bias is hard to detect for small groups or individual human users. Studies like those presented in ProPublica’s “Breaking the Black Box” series (below) have been based on groups systematically testing algorithms from different machines, locations, and users. Using investigative journalism, ProPublica has also found that algorithms used by law enforcement are significantly more likely to label African Americans as High Risk for reoffending and white Americans as Low Risk.
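The sketch below illustrates, in hypothetical form, one auditing strategy of this kind: feed a black-box scoring system pairs of inputs that are identical except for a single attribute and compare the outputs. The scoring function here is an invented stand-in, not ProPublica’s data or any real system.

```python
# A hypothetical sketch of one auditing strategy: send a black-box scorer
# pairs of profiles that are identical except for one attribute, then compare
# the outputs. `black_box_risk_score` stands in for a real, hidden system.

def black_box_risk_score(profile):
    # Invented stand-in for an opaque third-party model.
    score = 2 * profile["prior_arrests"]
    if profile["zip_code"] in {"85705", "85706"}:   # a hidden proxy variable
        score += 3
    return score

def audit(attribute, value_a, value_b, base_profile):
    a = dict(base_profile, **{attribute: value_a})
    b = dict(base_profile, **{attribute: value_b})
    return black_box_risk_score(a), black_box_risk_score(b)

base = {"prior_arrests": 1, "zip_code": "85701"}
print(audit("zip_code", "85701", "85705", base))  # (2, 5): same record, different score
```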
Fighting Unjust Algorithms
Algorithms are laden with errors. Some of these errors can be traced to the biases of those who developed them, as when a facial recognition system meant for global implementation is trained using data sets from only a limited population (say, predominantly white or male). Algorithms can also become problematic when they are hacked by groups of users, as Microsoft’s Tay was. Algorithms are also grounded in the values of those who shape them, and these values may reward some involved while disenfranchising others.
Despite their flaws, algorithms are increasingly used in heavily consequential ways. They predict how likely a person is to commit a crime or default on a bank loan based on a given data set. They can target users with messages on social media that are customized to fit their interests, their voting preferences, or their fears. They can identify who is in photos online or in recordings of offline spaces.
Confronting the landscape of increasing algorithmic control is activism to limit the control of algorithms over human lives. Below, read about the work of the Algorithmic Justice League and other activists promoting bans on facial recognition. And consider: What roles might algorithms play in your life that may deserve more attention, scrutiny, and even activism?
The Algorithmic Justice League vs facial recognition tech in Boston
MIT computer scientist and “Poet of Code” Joy Buolamwini heads the Algorithmic Justice League, an organization making remarkable headway in the fight against facial recognition technologies; she explains this work in the first video below. On June 9th, 2020, Buolamwini and other computer scientists presented alongside citizens at a Boston City Council meeting in support of a proposed ordinance banning facial recognition in public spaces in the city. The meeting was held and shared by live stream during COVID-19, and the footage offers a remarkable look at the value of human advocacy in shaping the future of social technologies. The second video below should be cued to the beginning of Buolamwini’s testimony, about half an hour in. Boston’s City Council subsequently voted unanimously to ban the city’s use of facial recognition technologies.
One or more interactive elements has been excluded from this version of the text. You can view them online here: https://opentextbooks.library.arizona.edu/hrsmwinter2022/?p=64#oembed-2
One or more interactive elements has been excluded from this version of the text. You can view them online here: https://opentextbooks.library.arizona.edu/hrsmwinter2022/?p=64#oembed-3
Core Concepts and Questions
Core Concepts
algorithm
a step-by-step set of instructions for getting something done to serve humans, whether that something is making a decision, solving a problem, or getting from point A to point B (or point Z)
why computers seem so smart today
cooperation from human software developers, and cooperation on the part of users
biases
assumptions about a person, culture, or population
filter bubble
a term coined by Eli Pariser, also called an echo chamber. A phenomenon in which we only see news and information we like and agree with, leading to political polarization
black box algorithms
the term used when the processes created for computer-based decision-making are not shared with or made clear to outsiders
The three I's
algorithms’ decisions can become invisible, irreversible, and infinite
Core Questions
A. Questions for qualitative thought
- Write and/or draw an algorithm (or your best try at one) to perform an activity you wish you could automate. Doing the dishes? Taking an English test? It’s up to you.
- Often there are spaces online that make one feel like an outsider, or like an insider. Study an online space that makes you feel like one of these – how is that outsider or insider status being communicated to you, or to others?
- Consider the history of how you learned whatever you know about computing. This could mean how you came to understand key terms, search online, write simple programs, code, and so on. Then, reinvent that history as if you’d learned everything you wish you knew about computing at the times and in the ways you feel you should have learned it.
B. Review: Let’s test how well you’ve been programmed. (Mark the best answers.)
An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://opentextbooks.library.arizona.edu/hrsmwinter2022/?p=64#h5p-25
An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://opentextbooks.library.arizona.edu/hrsmwinter2022/?p=64#h5p-26
An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://opentextbooks.library.arizona.edu/hrsmwinter2022/?p=64#h5p-27
An interactive H5P element has been excluded from this version of the text. You can view it online here:
https://opentextbooks.library.arizona.edu/hrsmwinter2022/?p=64#h5p-28
Related Content
Hear It: Electronic Frontier Foundation’s “Algorithms for a Just Future”
https://player.simplecast.com/72c98d21-5c9a-44fa-ae90-c2adcd4d6766?dark=false
EPISODE SUMMARY
The United States already has laws against redlining, but companies can still use other data to advertise goods and services to you—which can have big implications for the prices you see.
EPISODE NOTES
One of the supposed promises of AI was that it would be able to take the bias out of human decisions, and maybe even lead to more equity in society. But the reality is that the errors of the past are embedded in the data of today, keeping prejudice and discrimination in. Pair that with surveillance capitalism, and what you get are algorithms that impact the way consumers are treated, from how much they pay for things, to what kinds of ads they are shown, to if a bank will even lend them money. But it doesn’t have to be that way, because the same techniques that prey on people can lift them up. Vinhcent Le from the Greenlining Institute joins Cindy and Danny to talk about how AI can be used to make things easier for people who need a break. In this episode you’ll learn about:
- Redlining—the pernicious system that denies historically marginalized people access to loans and financial services—and how modern civil rights laws have attempted to ban this practice.
- How the vast amount of our data collected through modern technology, especially browsing the Web, is often used to target consumers for products, and in effect recreates the illegal practice of redlining.
- The weaknesses of consent-based models for safeguarding consumer privacy, which often mean that people are unknowingly waiving away their privacy whenever they agree to a website’s terms of service.
- How the United States currently has an insufficient patchwork of state laws that guard different types of data, and how a federal privacy law is needed to set a floor for basic privacy protections.
- How we might reimagine machine learning as a tool that actively helps us root out and combat bias in consumer-facing financial services and pricing, rather than exacerbating those problems.
- The importance of transparency in the algorithms that make decisions about our lives.
- How we might create technology to help consumers better understand the government services available to them.
This podcast is supported by the Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. This work is licensed under a Creative Commons Attribution 4.0 International License.
Additional music is used under creative commons license from CCMixter includes:
http://dig.ccmixter.org/files/djlang59/37792
Drops of H2O ( The Filtered Water Treatment ) by J.Lang (c) copyright 2012 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/djlang59/37792 Ft: Airtone
http://dig.ccmixter.org/files/zep_hurme/59681
Come Inside by Zep Hurme (c) copyright 2019 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/zep_hurme/59681 Ft: snowflake
http://dig.ccmixter.org/files/admiralbob77/59533
Warm Vacuum Tube by Admiral Bob (c) copyright 2019 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/admiralbob77/59533 Ft: starfrosch
http://dig.ccmixter.org/files/airtone/59721
reCreation by airtone (c) copyright 2019 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/airtone/59721
Read it: Do social media algorithms erode our ability to make decisions freely? The jury is out
Lewis Mitchell, University of Adelaide, and James Bagrow, University of Vermont
Social media algorithms, artificial intelligence, and our own genetics are among the factors influencing us beyond our awareness. This raises an ancient question: do we have control over our own lives? This article is part of The Conversation’s series on the science of free will.
Have you ever watched a video or movie because YouTube or Netflix recommended it to you? Or added a friend on Facebook from the list of “people you may know”?
And how does Twitter decide which tweets to show you at the top of your feed?
These platforms are driven by algorithms, which rank and recommend content for us based on our data.
As Woodrow Hartzog, a professor of law and computer science at Northeastern University, Boston, explains:
If you want to know when social media companies are trying to manipulate you into disclosing information or engaging more, the answer is always.
So if we are making decisions based on what’s shown to us by these algorithms, what does that mean for our ability to make decisions freely?
What we see is tailored for us
An algorithm is a digital recipe: a list of rules for achieving an outcome, using a set of ingredients. Usually, for tech companies, that outcome is to make money by convincing us to buy something or keeping us scrolling in order to show us more advertisements.
The ingredients used are the data we provide through our actions online – knowingly or otherwise. Every time you like a post, watch a video, or buy something, you provide data that can be used to make predictions about your next move.
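As a toy illustration (not any platform’s actual system), the sketch below turns a short history of “likes” into a prediction about what to show next. The data and categories are invented.

```python
from collections import Counter

# A toy illustration of how past "likes" become ingredients for predicting
# what to show a user next. The like history below is invented.

likes = ["sneakers", "sneakers", "running", "headphones", "sneakers"]

def predict_next_interest(like_history):
    # The most frequently liked category is predicted to interest the user next.
    counts = Counter(like_history)
    return counts.most_common(1)[0][0]

print(predict_next_interest(likes))  # "sneakers" -> show more sneaker ads
```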
These algorithms can influence us, even if we’re not aware of it. As the New York Times’ Rabbit Hole podcast explores, YouTube’s recommendation algorithms can drive viewers to increasingly extreme content, potentially leading to online radicalisation.
Facebook’s News Feed algorithm ranks content to keep us engaged on the platform. It can produce a phenomenon called “emotional contagion”, in which seeing positive posts leads us to write positive posts ourselves, and seeing negative posts means we’re more likely to craft negative posts — though this study was controversial partially because the effect sizes were small.
Also, so-called “dark patterns” are designed to trick us into sharing more, or spending more on websites like Amazon. These are tricks of website design such as hiding the unsubscribe button, or showing how many people are buying the product you’re looking at right now. They subconsciously nudge you towards actions the site would like you to take.
You are being profiled
Cambridge Analytica, the company involved in the largest known Facebook data leak to date, claimed to be able to profile your psychology based on your “likes”. These profiles could then be used to target you with political advertising.
“Cookies” are small pieces of data which track us across websites. They are records of actions you’ve taken online (such as links clicked and pages visited) that are stored in the browser. When they are combined with data from multiple sources including from large-scale hacks, this is known as “data enrichment”. It can link our personal data like email addresses to other information such as our education level.
These data are regularly used by tech companies like Amazon, Facebook, and others to build profiles of us and predict our future behaviour.
You are being predicted
So, how much of your behaviour can be predicted by algorithms based on your data?
Our research, published in Nature Human Behaviour last year, explored this question by looking at how much information about you is contained in the posts your friends make on social media.
Using data from Twitter, we estimated how predictable people’s tweets were, using only the data from their friends. We found data from eight or nine friends was enough to predict someone’s tweets just as well as if we had downloaded them directly (well over 50% accuracy). Indeed, 95% of the potential predictive accuracy that a machine learning algorithm might achieve is obtainable just from friends’ data.
Our results mean that even if you #DeleteFacebook (which trended after the Cambridge Analytica scandal in 2018), you may still be able to be profiled, due to the social ties that remain. And that’s before we consider the things about Facebook that make it so difficult to delete anyway.
We also found it’s possible to build profiles of non-users — so-called “shadow profiles” — based on their contacts who are on the platform. Even if you have never used Facebook, if your friends do, there is the possibility a shadow profile could be built of you.
On social media platforms like Facebook and Twitter, privacy is no longer tied to the individual, but to the network as a whole.
No more free will? Not quite
But all hope is not lost. If you do delete your account, the information contained in your social ties with friends grows stale over time. We found predictability gradually declines to a low level, so your privacy and anonymity will eventually return.
While it may seem like algorithms are eroding our ability to think for ourselves, it’s not necessarily the case. The evidence on the effectiveness of psychological profiling to influence voters is thin.
Most importantly, when it comes to the role of people versus algorithms in things like spreading (mis)information, people are just as important. On Facebook, the extent of your exposure to diverse points of view is more closely related to your social groupings than to the way News Feed presents you with content. And on Twitter, while “fake news” may spread faster than facts, it is primarily people who spread it, rather than bots.
Of course, content creators exploit social media platforms’ algorithms to promote content, on YouTube, Reddit and other platforms, not just the other way round.
At the end of the day, underneath all the algorithms are people. And we influence the algorithms just as much as they may influence us.
Lewis Mitchell, Senior Lecturer in Applied Mathematics, University of Adelaide, and James Bagrow, Associate Professor of Mathematics & Statistics, University of Vermont
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Media Attributions
- OA_image-5fd085054d2f2 © Omar Amanullah adapted by Emily Gammons is licensed under a CC BY (Attribution) license
- Movie_algorithm.svg © Jonathan is licensed under a Public Domain license
- Robot © DrSJS is licensed under a Public Domain license
- Source code of a simple computer program © Esquivalience is licensed under a CC0 (Creative Commons Zero) license
- Programming_language
- Tech_geek_7-21_1 is licensed under a Public Domain license
- 640px-A_Google_Glass_wearer © Loïc Le Meur is licensed under a CC BY (Attribution) license