The prevailing media credo, in domains that matter both a lot (popular, capitalist, and state discourse and action) and a little (communication, cultural, and media studies), is upheaval. The litany goes something like this: Corporate power is challenged. State authority is compromised. Avant-garde art and politics are centered. The young are masters, not victims. Technologies represent freedom, not domination. Revolutions are fomented by Twitter, not theory; by memes, not memos; by Facebook, not Foucault; by phone, not protest.
Political participation is just a click away. Tweets are the new streets and online friends the new vanguard, as 140ism displaces Maoism. Cadres are created and destroyed via BlackBerry. Teens tease technocrats. Hackers undermine hierarchy. Leakers douse the fire of spies and illuminate the shady world of diplomats.
The endless iterations offered by digital reproduction and the immediate exchanges promised by the Internet have turned the world on its head. We are advised that the media in particular are being transformed. Tradition is rent asunder. Newspapers are metaphorically tossed aside. What was once their fate in a literal sense (when we dispensed with print in trash cans) is now a figure of speech that refers to their financial decline. Journalists are recycled as public relations people, and readers become the new journalists. Cinema is irrelevant, TV is on the way out, gaming is the future, telephony is timeless, and the entire panoply of scholarship on the political economy of ownership and control is of archaeological interest at best.
This technophilic vision of old and middle-aged media being shunted aside by new media is espoused by a wide variety of actors. The corporate world is signed up: Netflix proudly proclaims that “Internet TV is replacing linear TV. Apps are replacing channels, remote controls are disappearing, and screens are proliferating.”1 IBM disparages “Massive Passives . . . in the living room . . . a ‘lean back’ mode in which consumers do little more than flip on the remote and scan programming.” By contrast, it valorizes and desires “Gadgetiers and Kool Kids” who “force radical change” because they demand “anywhere, anytime content.”2 I wish someone would pay me to come up with lines like those.
The state loves this new world too, despite the risks allegedly posed to its own essence. Let’s drop in on a Pentagon web site to see it share the joy: “Take the world’s most powerful sea, air and land force with you wherever you go with the new America’s Navy iPhone app. Read the latest articles. See the newest pics and videos. And learn more about the Navy—from its vessels and weapons to its global activities. You can do it all right on your iPhone—and then share what you like with friends via your favorite social media venues.”3
Civil society is also excited. The wonderfully named Progress & Freedom Foundation’s “Magna Carta for the Information Age” proposes that the political-economic gains made through democratic action since the thirteenth century have been eclipsed by technological ones: “The central event of the 20th century is the overthrow of matter. In technology, economics, and the politics of nations, wealth—in the form of physical resources—has been losing value and significance. The powers of mind are everywhere ascendant over the brute force of things.”4
The foundation has closed its doors, no doubt overtaken by pesky progress, but its discourse of liberty still rings loudly in our ears. Meanwhile, a prominent international environmental organization surveys me about its methods and appeal, asking whether I am prepared to sign petitions and embark on actions under its direction that might lead to my arrest. I prefer cozily comfortable middle-aged clicking to infantile attention-seeking incarceration, but either way, twinning the two is a telling sign of the times—as is doing so via corporate marketing techniques.
Even the bourgeois media take a certain pride in pronouncing their end of days. On the liberal left, the Guardian is prey to this beguiling magic: someone called “You” heads its 2013 list of the hundred most important folks in the media, with unknowns like Rupert Murdoch lagging far behind.5 Time magazine exemplified just such love of a seemingly immaterial world when it chose “You” as “Person of the Year” for 2006 because “You control the Information Age. Welcome to your world.”6 For its part, the New Statesman, a progressive British weekly, heralds the new epoch in a nationalistic way: “Our economic and political clout wanes,” but “when it comes to culture, we remain a superpower” because popular culture provides “critical tools through which Britain can market itself and its ideas to the world.”7
Many academics love this new age too, not least because it’s avowedly green: the Australian Council for the Humanities, Arts and Social Sciences informs the country’s Productivity Commission that we dwell in a “post-smokestack era”8—a blessed world for workers, consumers, and residents, with residues of code rather than carbon.9
The illustrations gathered above—arbitrarily selected but emblematic of profound tendencies across theories, industries, and places—amount to a touching but maddening mythology: cybertarianism, the belief that new media technologies are obliterating geography, sovereignty, and hierarchy in an alchemy of truth and beauty. Cybertarianism promises libertarian ideals and forms of life made real and whole thanks to the innately individualistic and iconoclastic nature of the newer media.10
In this cybertarian world, corporate and governmental cultural gatekeepers and hegemons are allegedly undermined by innovative possibilities of creation and distribution. The comparatively cheap and easy access to making and circulating meaning afforded by Internet media and genres is thought to have eroded the one-way hold on culture that saw a small segment of the world as producers and the larger one as consumers, even as it makes for a cleaner economy that glides into an ever-greener postindustrialism. Cybertarians celebrate their belief that new technologies allow us all to become simultaneously cultural consumers and producers—no more factory conditions, no more factory emissions.11
Crucial to these fantasies is the idea of the prosumer. This concept was invented by Alvin Toffler, a lapsed leftist and Reaganite signatory to the Progress & Freedom Foundation’s “Magna Carta.” Toffler was one of a merry band of futurists who emerged in the 1970s. He coined the term prosumer in 1980 to describe the vanguard class of a technologized future. (Toffler had a nifty knack for knee-jerk neologisms, as we will see.)12
Rather than being entirely new, the prosumer partially represented a return to subsistence, to the period prior to the Industrial Revolution’s division of labor—a time when we ate what we grew, built our own shelters, and gave birth without medicine. The specialization of agriculture and manufacturing and the rise of cities put an end to such autarky: the emergence of capitalism distinguished production from consumption via markets. But Toffler discerned a paradoxical latter-day blend of the two seemingly opposed eras, symbolized by the French invention and marketing of home pregnancy tests in the 1970s. These kits relied on the formal knowledge, manufacture, and distribution that typified modern life but permitted customers to make their own diagnoses, cutting out the role of doctors as expert gatekeepers between applied science and the self.
Toffler called this “production for self-use.” He saw it at play elsewhere as well: in the vast array of civil society organizations that emerged at the time, the craze for “self-help,” the popularity of self-serve gas stations as franchises struggled to survive after the 1973–74 oil crisis, and the proliferation of automatic teller machines as banks sought to reduce their retail labor force.
The argument Toffler made thirty-five years ago—that we are simultaneously cultural consumers and producers, that is, prosumers—is an idea whose time has come, as his fellow reactionary Victor Hugo almost put it.13 Readers become authors. Listeners transform into speakers. Viewers emerge as stars. Fans are academics. Zine writers are screenwriters. Bloggers are copywriters. Children are columnists. Bus riders are journalists. Coca-Cola hires African Americans to drive through the inner city selling soda and playing hip-hop. AT&T pays San Francisco buskers to mention the company in their songs. Urban performance poets rhyme about Nissan cars for cash, simultaneously hawking, entertaining, and researching. Subway’s sandwich commercials are marketed as made by teenagers. Cultural studies majors turn into designers. Graduate students in New York and Los Angeles read scripts for producers, then pronounce on whether they tap into the zeitgeist. Internally divided—but happily so—each person is, as Foucault put it forty years ago, “a consumer on the one hand, but . . . also a producer.”14
Along the way, all that seemed scholarly has melted into the air. Bitcoin and Baudrillard, creativity and carnival, heteroglossia and heterotopia—they’re all present but simultaneously theorized and realized by screen-based activists rather than academics. Vapid victims of ideology are now credible creators of meaning, and active audiences are neither active nor audiences—their uses and gratifications come from sitting back and enjoying the career of their own content, not from viewing others’. They resist authority not via aberrant decoding of texts that have been generated by professionals, but by ignoring such things in favor of making and watching their own.
Whether scholars like to attach electrodes to people’s naughty bits to establish whether porn turns them on, or interview afternoon TV viewers to discern progressive political tendencies in their interpretation of courtroom shows, they’re yesterday’s people. It doesn’t matter if they purvey rats and stats and are consummate quantoids, or eschew that in favor of populist authenticity as acafans and credulous qualtoids. Their day has passed. “Media effects” describes what people do to the media, not the other way round.
People in all spheres of scholarship say “my children” enjoy this, that, or the other by way of media use. These choices are held up as predictions of the future. No one says the same about, for example, their children’s food preferences, as if abjuring vegetables at age seven were a lifelong commitment. But when it comes to the media, children are mini-Tofflers, forecasters of a world they are also bringing into being.
Like Toffler’s all those decades ago, cybertarian discourse buys into individualistic fantasies of reader, audience, consumer, and player autonomy—the neoliberal intellectual’s wet dream of music, movies, television, and everything else converging under the sign of empowered and creative fans. The New Right of communication and cultural studies invests with unparalleled gusto in Schumpeterian entrepreneurs, evolutionary economics, and creative industries. It’s never seen an “app” it didn’t like or a socialist idea it did. Faith in devolved media-making amounts to a secular religion, offering transcendence in the here and now via a “literature of the eighth day, the day after Genesis.”15 This is narcissography at work, with the critic’s persona a guarantor of assumed audience revelry and Dionysian joy. Welcome to “Readers’ Liberation Movement” media studies.16
So strong a utopian line about digital technologies and the Internet is appealing in its totality, its tonality, its claims, its cadres, its populism, its popularity, its happiness, and its hopefulness. But such utopianism has seen a comprehensive turn away from addressing unequal infrastructural and cultural exchange, toward an extended dalliance with new technology and its supposedly innate capacity to endow users with transcendence.17 In 2011, the cost of broadband in the Global South was 40.3 percent of average gross national income (GNI) per capita. Across the Global North, by comparison, the price was less than 5 percent of GNI per capita.18 Within Latin America, for example, there are major disparities in pricing. One megabit a second in Mexico costs US$9, or 1 percent of average monthly income; in Bolivia, it is US$63, or 31 percent. Access is also structured unequally in terms of race, occupation, and region: indigenous people represent a third of rural workers in Latin America, and in some countries over half of them are essentially disconnected. The digital divide between indigenous people and the rest of the population is 0.3 in Mexico, 0.7 in Panama, and 0.6 in Venezuela.19 Rather than seeing new communications technologies as magical agents that can produce market equilibrium and hence individual and collective happiness, we should note their continued exclusivity.
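The scale of that disparity becomes concrete if the cited figures are run backward: each price-to-income percentage implies an average monthly income, and the two percentages together imply the relative affordability burden. A minimal back-of-envelope sketch in Python follows; the implied incomes are inferences from the quoted figures, not data reported in the sources.

```python
# Back-of-envelope arithmetic on the cited broadband figures:
# monthly price of 1 Mbit/s and its share of average monthly income.
# The implied incomes below are inferred, not reported, values.
figures = {
    "Mexico": (9.0, 0.01),   # US$9, 1 percent of average monthly income
    "Bolivia": (63.0, 0.31), # US$63, 31 percent of average monthly income
}

for country, (price_usd, income_share) in figures.items():
    implied_income = price_usd / income_share
    print(f"{country}: implied average monthly income ~US${implied_income:.0f}")

# Relative burden: 31 percent versus 1 percent of income means broadband
# is roughly thirty-one times less affordable in Bolivia than in Mexico.
print(f"Affordability gap: {0.31 / 0.01:.0f}x")
```

Running the sketch yields an implied average monthly income of roughly US$900 in Mexico against roughly US$203 in Bolivia: Bolivians pay seven times the price out of less than a quarter of the income.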
It is also worth noting that anticybertarian skeptics abound among public intellectuals, within cloistered scholarly worlds, and across the third sector. They offer ways of thinking that differ from the dominant ones. Consider Evgeny Morozov’s striking journalistic critiques, which have resonated powerfully in their refusal of technocentric claims for social change.20 On more scholarly tracks, many authors have produced ethnographic and political-economic work on the labor conditions experienced by people in the prosumer world, as well as policy explorations of digital capitalism and the state.21 Case studies of WikiLeaks, for instance, show the ambivalent and ambiguous sides to a phenomenon that has been uncritically welcomed by cybertarians, while we now know the extent of corporate surveillance enabled by their embrace of Facebook and friends.22 Beyond the Global North, thick descriptions of technocentric, cybertarian exploitation and mystification proliferate as the reality of successive liberatory “springs” supposedly unleashed by social media networks is exposed.23 And nongovernmental organizations raise the flag against crass celebrations of new media technologies that damage workers and the environment.24 This array of work provides a sturdy counterdiscourse to the admittedly still dominant cybertarian position.