
10.3: Measuring Public Opinion


    Learning Objectives

    By the end of this section, you will be able to:

    • Remember how public opinion is measured
    • Understand how individual measures of public opinion affect aggregate measures of public opinion
    • Apply principles of measurement to a public opinion survey

    Public Opinion Polls

    Researchers and consultants use a variety of techniques to gauge public opinion, but the most common tool is the public opinion poll. A public opinion poll draws a random sample of subjects from a broader pool of citizens; those subjects are interviewed, and their answers are used to make inferences about the larger body. In other words, by interviewing a smaller sub-sample of a population, we can make reasonable guesses about what the larger population believes.

    Imagine making a large pot of spaghetti sauce that you want to taste-test before serving it to others. By tasting a sample, you can make a reasonable guess about what the entire batch tastes like. For this to work, the small spoonful you use to test the sauce must contain all the ingredients and seasoning in the same proportions as the larger pot. It’s the same dynamic in polling (NBC News Learn 2020). If we want to gauge what the public believes on a set of issues, then our sample must include all the different combinations of demographics and regional influences found in that larger body.

    How does a smaller sample truly represent the larger public? The sample must be representative, that is, it must have the same features and elements in the same proportions as the larger body. To achieve this, researchers and political scientists use randomization when choosing respondents. Randomization in this case means that everyone in the larger population has an equal chance of being chosen for the smaller representative sample.
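    This equal-chance selection can be sketched in a few lines of Python; the population size and sample size below are hypothetical.

```python
import random

# Hypothetical population: ID numbers for 10,000 citizens.
population = list(range(10_000))

# random.sample draws without replacement, so every citizen has an
# equal chance of landing in the 500-person sample.
respondents = random.sample(population, k=500)

print(len(respondents))       # 500 interviews
print(len(set(respondents)))  # 500 -- no one is chosen twice
```

    Because every ID is equally likely to be drawn, repeated samples will, on average, mirror the demographic mix of the full population.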

    Imagine you had a perfectly weighted six-sided die. If you rolled it six times, odds are you would not get one of each number. If you rolled the die sixty times, it’s highly unlikely you would get ten of each number (an equal distribution of each side of the die), but you would probably get at least a couple of rolls with each number. If you rolled the die six hundred times, you probably would not get exactly one hundred of each number, but you would get closer to that even split than you did at sixty rolls. And if you rolled the die six thousand times, you would get even closer to an equal distribution than at six hundred or sixty rolls. In other words, the more times you roll the die, the more closely the results approach an equal distribution of each number.
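    The die experiment is easy to simulate. The sketch below (an illustration, not from the text) rolls a fair die different numbers of times and reports how far the worst face strays from the ideal one-sixth share.

```python
import random

def max_deviation(n_rolls, seed=42):
    """Roll a fair six-sided die n_rolls times and return the largest
    gap between any face's observed share and the ideal 1/6."""
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    counts = [0] * 6
    for _ in range(n_rolls):
        counts[rng.randint(0, 5)] += 1
    return max(abs(c / n_rolls - 1 / 6) for c in counts)

# More rolls -> observed shares hug 1/6 more tightly, just as larger
# random samples track the population more closely.
for n in (60, 600, 6_000, 60_000):
    print(n, round(max_deviation(n), 4))
```

    Running this, the deviation shrinks as the roll count grows, which is exactly the logic behind preferring larger random samples.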

    Even when following the laws of randomization, all public polls have a margin of error, that is, a statistical estimate of the accuracy of the sample. Returning to the sauce analogy, if you use a larger spoon to sample your spaghetti sauce, you are more likely to capture all of the ingredients than if you use a smaller spoon. In this case, the larger spoon has a lower margin of error than the smaller spoon. In other words, statistically, the larger random sample is more likely to be accurate than the smaller one.

    When reading polls, this number is represented with a “+/-” notation. So, if a poll claims that 45% of the public enjoys a particular beverage, and the margin of error is “+/- 5%”, the poll is really claiming that somewhere between 40% and 50% enjoy that drink (we simply added and subtracted 5% from 45%). How do you know whether the actual number is 40%, 42%, or 45%? The short answer is we do not. If we get a larger sample, we may be able to reduce our margin of error and get a more precise snapshot, but that is about the only remedy. Public opinion polling is a tool designed to estimate the public’s view. As long as one knows the limitations of this tool, polls can be a valuable technique for gauging opinion.
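    The standard formula behind that “+/-” figure, for a simple random sample at 95% confidence, is z·sqrt(p(1-p)/n). The sketch below applies it to the hypothetical 45% beverage result; the sample sizes are invented for illustration.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p estimated from a simple
    random sample of size n, at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 45% result from 400 respondents carries roughly a 5-point margin.
print(round(margin_of_error(0.45, 400) * 100, 1))    # 4.9 points

# Quadrupling the sample size only halves the margin of error,
# which is why shrinking the "+/-" gets expensive quickly.
print(round(margin_of_error(0.45, 1_600) * 100, 1))  # 2.4 points
```

    Note the square root in the formula: to cut the margin of error in half, you need four times as many interviews, not twice as many.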

    Another way to measure public opinion is by using focus groups (Morgan 1996). Whereas polls give us a good idea of what the broader public feels or thinks, they do not really reveal the real-life dynamics of how opinions are shaped or shared. Have you ever been engaged in a conversation where the person talking to you changed your mind? Or simply agreed with those around you because you really did not care about the issue? Or perhaps you felt strongly about an issue, were exposed to more information (in a news article, podcast, or television advertisement), and then changed your mind?

    Focus groups are small subsets of individuals who are exposed to a treatment of some kind and then asked about their impressions of that treatment. As participants share their impressions, others are allowed to inject their own opinions, and more real-life interaction can follow organically. Focus groups are a wonderful tool for seeing how opinions form or how dominant personalities influence those around them, but their results cannot be generalized to the public at large. Nevertheless, they are a good tool for trying to understand how an individual would react to a set of stimuli.

    Modes of Contact and Types of Polls

    The best scientific polls are usually done over the phone by Random Digit Dial (RDD). Random digit dial polls are good for a variety of reasons. Computers can make a lot of contacts within a short period of time, which yields more accurate findings: if a poll takes too long to complete, those who responded early on may have changed their minds by the time it finishes. This can become especially problematic if the poll is taken during an election with numerous candidates and one decides to drop out. In addition, researchers and pollsters can use the computer to dial numbers at random, which is one of the better ways to achieve a random sample. The biggest knock is that such a poll is biased against individuals who do not have phones or who only use cell phones. RDD polls can also be expensive (think of the hours it would take to call thousands of respondents to complete five hundred to a thousand interviews), but they are still more affordable than hiring people to go door to door. Although random digit dial is not perfect, it is better than some of the cheaper alternatives in a variety of ways.

    On-line polls have been used with greater frequency as technology has evolved, and on-line services such as Survey Monkey make it easy for anyone to create a poll and send out an email blast. Potential problems, though, quickly manifest. Many people are skeptical of doing anything on-line if they do not know the source, and technology now lets us screen out anonymous calls and emails (caller ID and spam folders). The lower the response rate of a poll (the percentage of contacts who complete the survey), the less accurate any assumptions about the targeted population become, because we cannot say with certainty that the people who answer the poll are similar to those who do not.

    On-line polls can be quite useful for collecting information in very specific circumstances, however; for example, a business might use an internal on-line poll to gauge attitudes within the company, where the respondents already know the poll is coming before they complete it. Some pollsters offer financial incentives (like a gift card or a lottery drawing) to improve their response rate, but doing so will, again, create more bias, because we can assume that those who complete the poll because of the incentive are more likely to need it.

    There are a variety of polls; some use RDD, some are on-line, and others use a combination. Tracking Polls are a common tool used by researchers and companies, often to measure approval ratings of public officials. Tracking polls collect a sample over a period of a few days (typically three to seven) using a rolling sample. Contacts are made every day, and the new contacts are continually added to the sample while the oldest contacts are taken out. A tracking poll is therefore more useful for examining the trajectory of attitudes than any single snapshot.
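    The rolling-sample mechanics amount to a moving average over the most recent days of contacts. The sketch below illustrates this with invented daily approval numbers and a three-day window.

```python
from collections import deque

# Hypothetical daily approval readings (% approving among each day's
# fresh contacts).
daily_results = [48, 51, 47, 52, 50, 46, 49, 53]

window = deque(maxlen=3)  # keep only the three most recent days
rolling = []
for day_pct in daily_results:
    window.append(day_pct)  # newest day in, oldest day drops out
    rolling.append(round(sum(window) / len(window), 1))

# Each reported number blends several days, smoothing daily noise and
# highlighting the trajectory rather than any single snapshot.
print(rolling)  # [48.0, 49.5, 48.7, 50.0, 49.7, 49.3, 48.3, 49.3]
```

    Notice how the rolling figures move less wildly than the raw daily readings; that smoothing is the point of the design.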

    Exit Polls are conducted on election day. As voters exit the polling location, poll workers interview respondents after they have voted. This data is good because we are asking individuals who have actually voted whom they voted for, as opposed to asking someone whom they plan to vote for. The shortcoming is that these polls are conducted earlier in the day (which may bias who is being interviewed) and use a tactic called systematic randomization (randomly choosing a starting respondent and then interviewing every third, fourth, or some other fixed number after). This tactic is much harder to follow consistently than conducting a cold call.
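    Systematic randomization is simple to sketch: pick a random starting point, then take every k-th voter. The voter list and interval below are hypothetical.

```python
import random

# Hypothetical stream of 100 voters leaving a polling place.
voters = [f"voter_{i}" for i in range(1, 101)]

k = 4                        # interview every 4th voter...
start = random.randrange(k)  # ...beginning at a random offset
interviewed = voters[start::k]

print(len(interviewed))  # 25 interviews, regardless of the offset
```

    In the field this discipline is hard to keep: a worker who skips an unapproachable voter or loses count during a rush breaks the fixed interval, which is why the tactic is harder to follow consistently than a computer-dialed cold call.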

    Push Polls are those designed to provide information under the guise of measuring someone’s opinion. Campaigns frequently use these to try to build false enthusiasm. Embedded in the poll is information the respondent may not know about the candidate. For example, imagine a series of questions like this: Question 1: Were you aware that candidate A once saved the life of a child from a burning building? Question 2: Were you aware that candidate B is under investigation for failing to pay child support? Question 3: If the election were tomorrow, would you vote for candidate A or B? As you can see, the question ordering is designed to push the respondent to answer a certain way.

    Straw Polls are often used at events or conventions to gauge the preferences of those who attended. The problem with these polls is that attendees are normally attracted to the event to see a candidate, so unless they hear something there that changes their mind, they will choose whoever attracted them. Imagine conducting a poll at a Los Angeles Lakers game asking attendees who their favorite basketball player is. The results you get at that game will be completely different from what you would get asking the same question among basketball fans nationwide.

    Finally, one needs to be cautious about using data from a non-scientific poll. The classic example of the erroneous results of a non-scientific poll is the Literary Digest poll of 1936, which wrongly predicted that Alf Landon would defeat Franklin Roosevelt; another famous polling embarrassment produced the false ‘Dewey Defeats Truman’ headline after the 1948 presidential election. We often see these polls on partisan or news websites. These polls are not random; those who complete them want to complete them. As a result, you can see results that are cartoonishly skewed. For example, say a conservative news network conducts a poll asking its viewers who won a debate, and the results show 90% of respondents saying the Republican candidate won. The problem is that those who visit the website are much more likely than not to be Republican voters. So of course they’ll choose the Republican candidate.

    Problems with Polls

    No matter how effectively a researcher or pollster achieves randomization of their sample, they all have to be concerned with potential pitfalls in how their poll is presented. One has to be conscious of priming effects. Priming effects occur when a question gets a respondent thinking about a subject they would not normally be thinking about, or would not be thinking about at that moment. Recent studies have confirmed that priming effects do occur (Lenz 2019). Suppose I contact a respondent and ask whether they prefer their hamburgers from Whataburger or In-N-Out. After they give a response, I ask what they plan to eat for dinner this evening, and shockingly, they respond, “hamburger.” They could, of course, be telling the truth, but would they have planned to eat a burger for dinner if I had not first gotten them thinking about hamburgers with the Whataburger/In-N-Out question?

    Another type of effect that can skew results is framing (Nelson and Oxley 1999). Framing effects are those that influence a respondent by how a question is presented. There may be several equally acceptable ways to ask a question, but the wording one chooses could present the issue in a different light. For example, imagine reading a newspaper article on the struggles of undocumented immigrants. Then imagine reading an identical article that uses the term “illegal” instead of “undocumented.” The tone of the entire article completely changes.

    When asking respondents about policy preferences, how language is articulated can influence their responses. For example, one could use humanizing language to refer to groups or a policy outcome, or conversely, one could use technical jargon or dehumanizing language when describing them. Trying to minimize priming and framing effects is one of the larger challenges researchers face. So, when using data from polls, it is important to read through the polling instrument or questionnaire with a critical eye. Critical analysis is essential in choosing data if the goal is to make generalizable inferences.

    In addition, one has to be cautious of social desirability effects (commonly referred to as the ‘Bradley Effect’ when discussed by the media) and band-wagon effects. Ask yourself: Are there responses to questions that society deems to be ‘good’ or ‘right’? And if your answer is ‘yes’, could questions about these topics yield improper results because the respondent either did not want to be embarrassed by answering ‘incorrectly’ or perhaps feared being judged by the interviewer?

    The assumption behind any poll is that respondents are telling the truth, and we know people do not always tell the truth. For example, if you are contacted by a college student at a call center, could your opinion on a political topic like “student loan debt forgiveness” change because you are talking to someone who is in college? What if you found the caller charming? Even if your answer is ‘no’, it does not take many people being affected by social desirability effects (Streb et al. 2008) to dramatically skew our perception of reality, especially in a society that is evenly polarized.

    Band-wagon effects are similar. In this case, it is not necessarily about going against society, but about being influenced by the enthusiasm of those around you (Marsh 1985). Growing up as a Los Angeles Angels baseball fan, I intuitively knew about band-wagon effects before I even took my first graduate-level seminar. Up until their World Series victory, the Angels were the laughingstock of baseball. They also had a reputation for being choke artists (always failing to finish strong); they were the type of organization whose fans might occasionally put a paper bag over their heads. And then one miraculous year, they won the World Series. As that season progressed, quite the cornucopia of Angels jerseys and halo hats flowered throughout Southern California. Will those fans be back when the team starts struggling again the next year or the year after? Likely not.

    Sometimes there is so much excitement (or anger) toward a candidate that it can spread infectiously. After all, have you ever liked or disliked someone without knowing why? You just knew that everyone around you (perhaps including people you trust dearly) felt a certain way, and you somewhat adopted the same belief even though you may not have had time to research the candidate or subject yourself. You cannot really avoid these effects through questionnaire wording, but be conscious of them when studying the short- and long-term results of public opinion. Sometimes (but not always), public opinion can shift just as dramatically in one direction as the other, as it can with band-wagon effects.

    When studying data from other countries and doing comparative analysis, political scientists have studied the rally-around-the-flag phenomenon (Baum 2002). These effects can be seen in the dramatic (and often temporary) spike in public approval that national leaders enjoy during wartime, or at least at the beginning of an armed conflict. The suddenness of the spikes also suggests the underlying opinions are less stable. For example, President George H. W. Bush’s public approval was in the low nineties after Operation Desert Storm, yet he lost the presidential election less than two years later.

    President George W. Bush’s approval was in the low nineties after 9/11, but he won his re-election by only a little over two percent of the popular vote. A sense of one’s identity as a citizen of a nation is activated, and one temporarily rearranges one’s priorities in response to an external threat (Hetherington and Nelson 2003). The same type of phenomenon can be activated around any group identity one may share. For example, have you ever been annoyed hearing someone bash your age group even though you were not the target of the criticism?

    The shortcomings of public opinion polls should not dissuade someone from using them as a research tool. Many good researchers have been trained in excellent techniques for minimizing these potential problems. Polling and consulting firms are used by many schools and companies for data collection and marketing purposes, and companies and news outlets would not pay for results if polling did not have a strong track record of success. But as polling has become more common, many more companies and firms have entered the industry, and some businesses do not apply the same traditional standards and safeguards in their polls that academics do. So it is always a good idea to look at the accuracy of a company’s past polls when looking for reputable places to get data.