14.4: Logical Fallacies

    The second part of achieving a logical speech is to avoid logical fallacies. Logical fallacies are mistakes in reasoning, that is, errors in applying the inductive or deductive formulas discussed earlier. There are dozens upon dozens of fallacies, some of which have complicated Latin names. This chapter deals with eighteen of the most common ones that you should know in order to avoid poor logic in your speech and to become a better critical thinker.

    False Analogy

    A false analogy is a fallacy in which two things that do not share enough key similarities are compared as if they did. As mentioned before, for analogical reasoning to be valid, the two things being compared must be essentially similar, that is, similar in all the important ways. Two states could be analogous if they are in the same region, have similar demographics and histories, are of similar size, and have other aspects in common. Georgia is more like Alabama than it is like Hawaii, although all three are states. An analogy between the United States and, for example, a tiny European country with a homogeneous population is probably not a valid analogy, although it is a common one. Even when the two “things” being compared are genuinely similar, you should be careful to support your argument with other evidence.

    False Cause

    False cause is a fallacy that assumes one thing causes another when there is no logical connection between the two. A true cause must be direct and strong enough to produce the effect; merely coming before the problem, or being loosely related to it, is not enough. In a false cause fallacy, the alleged cause is not strong or direct enough. For example, there has been much debate over the causes of the recession of 2008. If someone said, “The exorbitant salaries paid to professional athletes contributed to the recession,” that would be the fallacy of false cause. Why? For one thing, the salaries, though large, are an infinitesimal part of the whole economy. Second, those salaries affect only a small number of people. Third, those salaries have nothing to do with the housing market or the management of the large car companies, banks, or Wall Street firms, which had a stronger and more direct effect on the economy as a whole. In general, while we are often tempted to attribute a large societal or historical outcome to a single cause, that is rarely how things work in real life.

    Slippery Slope

    A slippery slope fallacy is a type of false cause which assumes that taking a first step will lead to subsequent events that cannot be prevented. The children’s book If You Give a Moose a Muffin is a good example of slippery slope; it tells all the terrible things (from a child’s point of view) that will happen, one after another, if a moose is given a muffin. If A happens, then B will happen, then C, then D, then E, F, and G, and things will get worse and worse until, before you know it, we are all in some sort of ruin. So, don’t do A, or don’t let A happen, because it will inevitably lead all the way to Z, and of course, Z is terrible.

    This type of reasoning fails to look at alternate causes or factors that could keep the worst from happening, and it often becomes rather silly when A is linked directly to Z. A young woman may say to a young man asking her out, “If I go out with you Thursday night, I won’t be able to study for my test Friday. Then I will fail the test. Then I will fail the class. Then I will lose my scholarship. Then I will have to drop out of college. Then I will not get the career I want, and I’ll be 30 years old, still living with my parents, unmarried, unhappy, and with no children or career! That’s why I just can’t go out with you!” Obviously, this young woman has gone out of her way to get out of this date, and she has committed a slippery slope fallacy. Additionally, since no one can predict the future, we can never be entirely certain about the direction a given chain of events will take.

    Slippery slope arguments are often used in discussions of emotional, hot-button topics such as gun control and physician-assisted suicide. One might argue that “If guns are outlawed, only outlaws will have guns,” a bumper sticker you may have seen. This is a slippery slope argument because it claims that any gun control law will inevitably lead to no guns being allowed at all in the U.S., with the inevitable result that only criminals will have guns, since they do not obey gun control laws anyway. While it is true that criminals do not care about gun laws, we already have a large number of gun laws, and the level of gun ownership is as high as ever.

    However, just because an argument is criticized as a slippery slope, that does not mean it is a slippery slope. Sometimes actions do lead to far-reaching but unforeseen events, according to the “law of unintended consequences.” We should look below the surface to see if the accusation of slippery slope is true.

    For example, in regard to the anti-gun control “bumper sticker,” an investigation of the facts will show that gun control laws have been ineffective in many ways since we have more guns than ever now (347 million, according to a website affiliated with the National Rifle Association). However, according to the Brookings Institution, there are

    “‘…about 300 major state and federal laws, and an unknown but shrinking number of local laws’. . . . Rather than trying to base arguments for more or fewer laws on counting up the current total, we would do better to study the impact of the laws we do have.” (Vernick & Hepburn, 2003, p. 2).

    Note that in the previous paragraph, two numerical figures are used, both from sources that are not free of bias. The National Rifle Association obviously opposes gun restrictions and does not support the idea that there are too many guns; its website gives the background to show how its figure was calculated. The Brookings Institution is a “think tank” (a group of scholars who write about public issues) that advocates gun control. Its article explains how it arrived at its count of state and federal laws, but admits that it omitted many local laws about carrying or firing guns in public places, so by its own admission the actual number is higher. The Brookings Institution does not think there are too many laws; it thinks there should be more, or at least better enforced, ones. It should also be noted that the article is based on data from 1970-1999, so the information may be out of date.

    This information about the sources is provided to make a point about possible bias in sources and about critical thinking and reading, or more specifically, reading carefully to understand your sources. Just finding a source that looks pretty good is not enough. You must ask important questions about the way the information is presented. An interesting addition to the debate is found at www.rand.org/research/gun-policy/essays/what-science-tells-us-about-the-effects-of-gun-policies.html. Although most people have strong opinions about gun control, pro and con, it is a complicated debate that requires, like most societal issues, clear and critical thinking about the evidence.

    Hasty Generalization

    A hasty generalization is a generalization made from too few examples. It is so common that we might wonder whether there are any legitimate generalizations. The key to generalizations is how the conclusions are “framed,” or put into language: the conclusions should be specific and clear about the limited nature of the sample. Even worse is when the generalization is then applied too hastily to other situations. For example:

    Premise: Albert Einstein did poorly in math in school.

    Conclusion: All world-renowned scientists do poorly in math in school.

    Secondary Conclusion: I did poorly at math in school, so I will become a world-renowned scientist.

    Or consider this example, one that college professors hear all the time:

    Premise: Mark Zuckerberg dropped out of college, invented Facebook, and made billions of dollars.

    Premise: Bill Gates dropped out of college, started Microsoft, and made billions of dollars.

    Conclusion: Dropping out of college leads to great financial success.

    Secondary conclusion: A college degree is unnecessary to great financial success.

    Straw Man

    A straw man fallacy shows only the weaker side of an opponent’s argument in order to tear it down more easily. The term “straw man” brings up the image of a scarecrow, and that is the idea behind the expression: even a child can beat up a scarecrow; anyone can. The straw man fallacy happens when a debater misrepresents an opponent’s position, or seizes on only a small part of it, and then blows that misinterpretation or minor point out of proportion, treating it as a major part of the opponent’s position. This is often done through ridicule, taking statements out of context, or misquoting.

    Politicians, unfortunately, commit the straw man fallacy quite frequently, but they are hardly the only ones. Someone may argue that college professors don’t care about students’ learning because professors say, “You must read the chapter to understand the material; I can’t explain it all to you in class.” That takes a behavior and makes it mean something it doesn’t. Likewise, saying “College A is not as good as College B because the cafeteria food at College A is not as good” is a pretty weak argument for attending one college over another; it makes far too much of a minor point.

    Post hoc ergo propter hoc

    This long Latin phrase means “after this, therefore because of this.” Also called the historical fallacy, it is an error in causal reasoning: it uses progression in time, and nothing else, as the reason for causation. In this scenario, A happens, then B happens; therefore A caused B. The fallacy assumes that because an event takes place first in time, it is the cause of an event that takes place later in time. We know that is not true, but sometimes we act as if it is.

    Elections often get blamed for everything that happens afterward. It is true that a cause must happen before its effect, but that does not mean that everything that happens beforehand must be the cause. In the example given earlier, a football team losing its game five days earlier can’t be the reason for a student failing a test just because it happened first.

    Argument from Silence

    You can’t prove something from nothing. If the constitution, legal system, authority, or the evidence is silent on a matter, then that is all you know. You cannot conclude anything about that. “I know ESP is true because no one has ever proven that it isn’t true” is not an argument. Here we see the difference between fallacious and false. Fallacious has to do with the reasoning process being incorrect, not with the truth or falseness of the conclusion. If I point to a girl on campus and say, “That girl is Taylor Swift,” I am simply stating a falsehood, not committing a fallacy. If I say, “Her name is Taylor Swift, and the reason I know that is because no one has ever told me that her name is not Taylor Swift” (argument from silence), that is a fallacy and a falsehood. (Unless by some odd circumstance her name really is Taylor Swift or the singer Taylor Swift frequents your campus!)

    Statistical Fallacies

    There are many ways that statistics can be used unethically, but here we will deal with three. The first type of statistical fallacy is “small sample,” the second is “unrepresentative sample,” and the third is a variation of appeal to popularity (discussed below). In small sample, an argument is made from too few examples, so it is essentially hasty generalization. In unrepresentative sample, a conclusion is based on surveys of people who do not represent, or resemble, the ones to whom the conclusion is being applied. If you ever take a poll on a website, it is not “scientific” because it is unrepresentative. Only people who go to that website are participating, and the same people could be voting over and over. In a scientific or representative survey or poll, the pollsters talk to people of different socio-economic classes, races, ages, and genders, and the data-gathering is very carefully performed.

    If you go to the president of your college and say, “We need to have a daycare here because 90% of the students say so,” but you only polled ten students, that would be small sample. If you say, “I polled 100 students,” that would still be small, but better, unless all of them were your friends who attended other colleges in the state. That group would not be representative of the student body. If you polled 300 students but they were all members of the same high school graduating class and the same gender as you, that would also be unrepresentative sample.

    In the end, surveys indicate trends in opinions and behaviors, not the future and not the truth. We have lots of polls before the election, but only one poll matters—the official vote on Election Day.

    Non Sequitur

    Non sequitur is Latin for “it does not follow.” It’s an all-purpose fallacy for situations where the conclusion sounds good at first but then you realize there is no connection between the premises and the conclusion. If you say to your supervisor, “I need a raise because the price of BMWs went up,” that is a non sequitur.

    Inappropriate Appeal to Authority

    There are appropriate appeals to authority, such as when you use sources in your speech who are knowledgeable, experienced, and credible. But not all sources are credible. Some may be knowledgeable about one field but not another. A person with a Nobel Prize in economics is not qualified to talk about medicine, no matter how smart he/she is (the economist could talk about the economic factors of medicine, however). Of course, the most common place we see this is in celebrity endorsements on commercials.

    False Dilemma

    This one is often referred to as the “either-or” fallacy. When you are given only two options, and more than two options exist, that is false dilemma. Usually in false dilemma, one of the options is undesirable and the other is the one the persuader wants you to take. False dilemma is common. “America: Love it or Leave It.” “If you don’t buy this furniture today, you’ll never get another chance.” “Vote for Candidate Y or see our nation destroyed.”

    Appeal to Tradition

    Essentially, appeal to tradition is the argument, “We’ve always done it this way.” This fallacy happens when traditional practice is the only reason for continuing a policy. Tradition is a great thing. We do many wonderful things for the sake of tradition, and it makes us feel good. But doing something only because it’s always been done a certain way is not an argument. Does it work? Is it cost effective? Is some other approach better? If your college library refused to adopt a computer database of books in favor of the old card catalog because “that’s what libraries have done for decades,” you would likely argue they need to get with the times. The same would be true if the classrooms all still had only chalkboards instead of computers and projectors and the administration argued that it fit the tradition of classrooms.

    Bandwagon

    This fallacy is also referred to as “appeal to majority” and “appeal to popularity,” drawing on the old expression “get on the bandwagon” to support an idea. Essentially, bandwagon is a fallacy that asserts that because something is popular (or seems to be), it is therefore good, correct, or desirable. In a sense it was mentioned before, under statistical fallacies. Of course, you’ve probably heard it or said it many times: “Everybody is doing it.” Well, of course, not everybody is doing it; it just seems that way. And the fact (or perception) that more than 50% of the population is engaging in an activity does not make it a wise activity.

    Many times in history, more than 50% of the population believed or did something that was not good or right, such as believing that the earth was the center of the solar system and the sun orbited around it. In a democracy we make public policy to some extent based on majority rule, but we also have protections for the minority, which is a wonderful part of our system. It is sometimes foolish to say that a policy is morally right or wrong or wise just because it is supported by a majority of the people. So when you hear a public opinion poll that says, “58% of the population thinks…,” keep this in mind. All it means is that 58% of the people surveyed indicated a certain belief or attitude, not that the belief or attitude is correct or that it will be the majority opinion in the future.

    Red Herring

    This one has an interesting history, and you might want to look it up. A herring is a fish, and it was once used to throw foxhounds off a particular scent. A red herring, then, is a diversion: introducing an irrelevant point to distract someone or get someone off the subject of the argument. When a politician in a debate is asked about his stance on immigration and responds, “I think we need to focus on reducing the debt. That’s the real problem!”, he is introducing a red herring to distract from the original topic under discussion. If someone argues, “We should not worry about the needs of people in other countries because we have poor people in the United States,” that may sound good on the surface, but it is a red herring and a false dilemma (either-or) fallacy. It is possible to address poverty in this country and in other countries at the same time.

    Ad Hominem

    This Latin term means “argument to the man,” and generally refers to a fallacy that attacks the person rather than dealing with the real issue in dispute. A person using ad hominem connects a real or perceived flaw in a person’s character or behavior to an issue he or she supports, asserting that the flaw in character makes the position on the issue wrong. Obviously, there is no connection. In a sense, ad hominem is a type of red herring because it distracts from the real argument. In some cases, the “hidden agenda” is to say that because someone of bad character supports an issue or argument, therefore the issue or argument is not worthy or logical.

    A person using ad hominem might say, “Climate change is not true. It is supported by advocates such as Congressman Jones, and we all know that Congressman Jones was convicted of fraud last year.” This is not to say that Congressman Jones should be re-elected, only that whether climate change is true or false is irrelevant to the fraud conviction. Do not confuse ad hominem with poor credibility or ethos. A speaker’s ethos, based on character or past behavior, does matter; it just doesn’t mean that the issues he or she supports are logically or factually wrong.

    Ad Misericordiam

    This Latin term means “appeal to pity,” and sometimes that English term is used instead of the Latin one. There is nothing wrong with pity and human compassion as an emotional appeal in a persuasive speech; in fact, it is definitely one you might want to use if it is appropriate, such as to solicit donations to a worthwhile charity. However, if the appeal to pity is used to stir emotion and cover up a lack of facts and evidence, it becomes a smokescreen that deceives the audience. If a nonprofit organization tried to get donations by tugging at your heartstrings, that emotion might divert your attention from how much of the donation really goes to the “cause.” Chapter 3 of this book looked at ethics in public speaking, and the intentional use of logical fallacies is a breach of ethics, even if the audience accepts them and does not apply its own critical thinking.

    Plain Folks

    Plain folks is a tactic commonly used in advertising and by politicians. Powerful persons will often try to make themselves appear like the “common man.” A man running for Senate may walk around his farm in a flannel shirt in a campaign ad. (Flannel shirts are popular for politicians, especially in the South.) The owner of a large corporation may want you to think his company cares about the “little guy” by appearing in an ad helping on the assembly line. The image these situations create says, “I’m one of the guys, just like you.” There is nothing wrong with wearing a flannel shirt and looking at one’s farm, unless the purpose is to divert attention from the real issues.

    Guilt by Association

    This fallacy is a form of false analogy based on the idea that if two things bear any relationship at all, they are comparable. No one wants to be blamed for something just because she was in the wrong place at the wrong time or happens to bear some resemblance to a guilty person. An example would be if someone argued, “Adolf Hitler was a vegetarian; therefore being a vegetarian is evil.” Of course, vegetarianism as a life practice had nothing to do with Hitler’s character. Although this is an extreme example, it is not uncommon to hear guilt by association used as a type of ad hominem argument. There is even a named fallacy, “reductio ad Hitlerum,” committed whenever someone dismisses an argument by bringing up Hitler out of nowhere.

    There are other fallacies, many of which go by Latin names. You can visit other websites, such as http://www.logicalfallacies.info/, for more types and examples. These eighteen are a good start toward helping you discern good reasoning and supplement your critical thinking knowledge and ability.

    Conclusion

    This chapter took the subject of public speaking to a different level in that it was somewhat more abstract than the other chapters. However, a public speaker is as responsible for using good reasoning as she is for organizing the speech, analyzing the audience, and practicing for effective delivery.

    Something to Think About

    You cannot hear logical fallacies unless you listen carefully and critically. Keep your ears open to possible uses of fallacies. Are they used in discussion of emotional topics? Are they used to get compliance (such as to buy a product) without allowing the consumer to think about the issues? What else do you notice about them?

    Here is a class activity one of the authors has used in the past to teach fallacies. With a small group of classmates, create a “fallacy skit” to perform for the class. Plan and act out a situation where a fallacy is being used, and then be able to explain it to the class. The example under Slippery Slope about the young woman turning down a date actually came from the author’s students in a fallacy skit.


    This page titled 14.4: Logical Fallacies is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Kris Barton & Barbara G. Tucker (GALILEO Open Learning Materials) via source content that was edited to the style and standards of the LibreTexts platform.