Seeding Mass Confusion
The strong resurgence of the term in recent years has been led largely (but not exclusively) by conservative commentators. It has been used most vociferously (and effectively) by former U.S. President Donald Trump who, in 2018, presented what he called "fake news awards" to traditional U.S. media outlets.
Trump repeatedly claimed that major news outlets lied about numerous aspects of his political and personal life, even as he himself made a range of demonstrably false claims at a rate unprecedented for a high-ranking elected leader; this pattern has been linked to the notion of gaslighting. As media scholar Caroline Jack argues, this rhetorical and psychological strategy relies on the intentional orchestration of deceptions and biased narrations not only to confuse individuals but also to distort audiences' trust in their own perceptions and memories. The term "gaslighting" is likewise not new (it has been traced to a 1938 theatrical play), but it is useful in conceptualizing attempts by political actors to use misdirection, denial, and disinformation to sow confusion and undermine trust in institutions.
More broadly, systematic campaigns to confuse the public and undermine trust in institutions have occurred multiple times throughout history and across different international contexts. (These are distinct from propaganda, which is a more common effort that strategically uses information to increase trust in institutions or build support for, or opposition to, a cause.) For example, the former Soviet Union used the term dezinformatsiya to conceptualize coordinated state efforts to disseminate false or misleading information to journalistic media (among other forms of media) in targeted countries or regions. This was just one of the Soviet state's activnye meropriyatiya, or 'active measures,' employed to strategically undermine and disrupt governance by opposing nation-states while strengthening the positions of allies. These measures included spreading disinformation through multiple channels (e.g., through fake grassroots campaigns, a practice also known as astroturfing) to widen existing domestic rifts, stoke existing tensions, and complicate international relations.
More recently, scholars have used the term xuanchuan (a nod to an existing Chinese term) to describe the use of coordinated posts on social media to flood conversational spaces with a mix of positive messages, negative messages, and attempts to change the subject as part of a broader misdirection strategy. Under this approach, the goal is not simply to promote false information but rather to overwhelm the system with information, making it harder for individuals to come across certain kinds of information. For example, analysts have pointed to China’s so-called "50 Cent Army" (or "50 Cent Party"), groups of online commentators thought to number in the millions who are regularly employed by Chinese authorities, as an example of the mobilization of large groups to systematically promote echo chambers, hijack hashtags, and steer public discourse away from sensitive topics.
It is important to note that although they can be useful in capturing specific approaches to seeding mass confusion, terms like 'dezinformatsiya' and 'xuanchuan' can also promote negative stereotypes and limit conversation. For example, there are also related non-state efforts to disrupt specific social campaigns, as when K-Pop fans banded together to hijack hashtags used to coordinate white supremacist activity. These terms should thus be used with care due to the cultural associations they elicit. Meanwhile, easier and cheaper access to powerful computers and high-speed internet connections has made it possible for individuals and small teams around the world to automate the production and amplification of disinformation in digital environments.
The resurgence of the term "fake news" and high-profile, coordinated disinformation campaigns have spurred a rise in civic and governmental attempts to counter online misinformation and disinformation. In particular, several fact-checking organizations have emerged in recent years. These organizations aim to authenticate statements made by institutional sources (e.g., elected leaders), debunk social media hoaxes, and assess the legitimacy of particular information sources. However, several scholars have found that such interventions have made little headway in combating large-scale disinformation campaigns or restoring trust in journalistic institutions. Thus, journalistic outlets are still seeking effective ways to counter disinformation, all the while struggling to adapt to a fast-paced environment in which they risk producing misinformation themselves.