2.6: What If I’m Not Sure About a Source’s Reliability?
Authority and reliability are tricky to evaluate. Whether we admit it or not, most of us tend to ascribe authority to sites and authors that seem to support our viewpoints, and to approach publications that disagree with our worldview with skepticism.
How do we escape our own prejudices? Try applying Wikipedia’s guidelines for determining the reliability of publications. These guidelines were developed to help people with diametrically opposed positions argue in dispassionate ways about the reliability of sources using common criteria. For Wikipedians, reliable sources are defined by process, expertise, and aim.
Process
Above all, a reliable source for facts should have a process in place for encouraging accuracy, verifying facts, and correcting mistakes. Note that reputation and process are separate from issues of bias. The editorial pages of the New York Times have a center-left bias, while those of the Wall Street Journal have a center-right bias. The stories they choose to cover are also influenced by editors deciding what is important for their readership and role. Yet fact-checkers of all political stripes are happy to be able to track a fact down to one of these publications, since both have reputations for a high degree of accuracy and issue corrections when they get facts wrong.
The same thing applies to peer-reviewed publications. While there is much debate about the inherent flaws of peer review, peer review does mean there are many eyes on data and results. This process helps to keep many obviously flawed results out of publication. If a peer-reviewed journal has a large following of experts, that provides even more eyes on the article, and more chances to spot flaws. Since one’s reputation for research is on the line in front of one’s peers, it also provides incentives to be precise in claims and careful in analysis in a way that other forms of communication might not.
Expertise
According to Wikipedians, researchers and certain classes of professionals have expertise, and their usefulness is defined by that expertise. For example, we would expect a marine biologist to have a more informed opinion about the impact of global warming on marine life than the average person, particularly if the biologist has done research in that area. Professional knowledge matters too: we’d expect a health inspector to have a reasonably good knowledge of health code violations, even if they are not a published scholar of the area. And while we often think researchers are more knowledgeable than professionals, this is not always the case. For a range of issues, professionals in a given area might have more nuanced and up-to-date insight than many researchers, especially where questions deal with common practice.
Reporters, on the other hand, often have no domain expertise, but may strive to accurately summarize and convey the views of experts, professionals, and event participants. Reporters who write in a niche area (their “beat”) over many years (e.g. science or education policy) may acquire expertise themselves. Nevertheless, they will seek out experts for information when working on a story.
Aim
Aim is defined by what the publication, author, or media source is attempting to accomplish. Aims are complex. Respected scientific journals, for example, aim for prestige within the scientific community by publishing important new research, but must also have a business model to fund their publishing operation. The New York Times relies on subscriptions and ad revenue but is also dependent on maintaining a reputation for accuracy and even-handedness, so it maintains an organizational separation between the staff whose job is to bring in money and the editors and journalists whose job is to report news.
One way to think about aim is to ask what incentives an article or author has to get things right. An opinion column that gets a fact or two wrong won’t cause its author much trouble, whereas an article in a newspaper that gets facts wrong may damage the reputation of the reporter. On the far ends of the spectrum, a single bad or retracted article by a scientist can ruin a career, whereas an advocacy blog site or a YouTube celebrity can twist facts daily with few consequences.
Policy think tanks, such as the Cato Institute and the Center for American Progress, are interesting hybrid cases. To maintain their funding, they must continue to promote aims that have a particular bias. At the same time, their prestige depends on them promoting these aims (and being clear about their mission) while maintaining some level of factual integrity.
Bottom line: look for publications that have strong incentives to get things right, as shown by their authorial intent, business model, reputational incentives, and mission.