
14.2: Inductive Reasoning


    Inductive reasoning (also called “induction”) is probably the form of reasoning we use most often. Induction is sometimes referred to as “reasoning from example or specific instance,” and indeed, that is a good description. It could also be called “bottom-up” thinking. Inductive reasoning is sometimes called “the scientific method,” although you don’t have to be a scientist to use it, and the word “scientific” gives the impression that it is always right and always precise, which it is not. In fact, we are just as likely to use inductive logic incorrectly or vaguely as we are to use it well.

    Inductive reasoning happens when we look around at various happenings, objects, behavior, etc., and see patterns. From those patterns we develop conclusions. There are four types of inductive reasoning, based on different kinds of evidence and logical moves or jumps.

    Generalization

    Generalization is a form of inductive reasoning that draws conclusions based on recurring patterns or repeated observations. Vocabulary.com (2016) goes one step further to state it is “the process of formulating general concepts by abstracting common properties of instances.” To generalize, one must observe multiple instances and find common qualities or behaviors and then make a broad or universal statement about them. If every dog I see chases squirrels, then I would probably generalize that all dogs chase squirrels.

    If you go to a certain business and get bad service once, you may not like it. If you go back and get bad treatment again, you probably won’t return because you have concluded “Business X always treats its customers badly.” However, according to the laws of logic, you cannot really say that; you can only say, “In my experience, Business X treats its customers badly” or, more precisely, “has treated me badly.” Additionally, the word “badly” is imprecise, so for the conclusion to be a valid generalization, “badly” should be replaced with something more specific, such as “rudely,” “dishonestly,” or “dismissively.” The two problems with generalization are over-generalizing (making too big an inductive leap, or jump, from the evidence to the conclusion) and generalizing without enough examples (hasty generalization, also seen in stereotyping).

    In the example of the service at Business X, two examples are really not enough to conclude that “Business X treats customers rudely.” The conclusion does not pass the logic test for generalization, but pure logic may not influence whether or not you patronize the business again. Logic and personal choice overlap sometimes and separate sometimes. If the business is a restaurant, it could be that there is one particularly rude server at the restaurant, and he happened to wait on you during both of your experiences. It is possible that everyone else gets fantastic service, but your generalization was based on too small a sample.

    Inductive reasoning through generalization is used in surveys and polls. If a polling organization follows scientific sampling procedures (sample size, ensuring different types of people are involved, etc.), it can conclude that their poll indicates trends in public opinion. Inductive reasoning is also used in science. We will see from the examples below that inductive reasoning does not result in certainty. Inductive conclusions are always open to further evidence, but they are the best conclusions we have now.
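    The logic of generalizing from a sample to a population can be made concrete with a short simulation. This is a toy sketch, not real polling methodology: the population size, the 54% figure, and the `poll` function are all invented for illustration.

```python
import random

random.seed(42)

# Hypothetical population: 100,000 people, 54% of whom favor a proposal.
# (Both numbers are made up for this example.)
population = [True] * 54_000 + [False] * 46_000
random.shuffle(population)

def poll(sample_size):
    """Generalize from a random sample to the whole population."""
    sample = random.sample(population, sample_size)
    return sum(sample) / sample_size

# A larger random sample usually lands closer to the true 54%;
# a tiny sample is the numerical version of a hasty generalization.
print(f"n=10:   {poll(10):.0%}")
print(f"n=1000: {poll(1000):.1%}")
```

    The point of the sketch is that the inductive conclusion ("about 54% favor the proposal") is only as trustworthy as the sampling behind it, which is why polling organizations publish sample sizes and margins of error.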

    For example, if you are a coffee drinker, you might hear news reports at one time that coffee is bad for your health, and then six months later that another study shows coffee has positive effects on your health. Scientific studies are often repeated or conducted in different ways to obtain more and better evidence and make updated conclusions. Consequently, the way to disprove inductive reasoning is to provide contradictory evidence or examples.

    Causal reasoning

    Instead of looking for patterns the way generalization does, causal reasoning seeks to make cause-effect connections. Causal reasoning is a form of inductive reasoning we use all the time without even thinking about it. If the street is wet in the morning, you conclude from past experience that it rained overnight. Of course, there could be another cause—the city decided to wash the streets early that morning—but your first conclusion would be rain. Because causes and effects can be numerous and complicated, two tests are used to judge whether causal reasoning is valid.

    Good inductive causal reasoning meets the tests of directness and strength. The alleged cause must have a direct relationship to the effect, and the cause must be strong enough to produce the effect. If a student fails a test in a class that he studied for, he would need to examine the causes of the failure. He could look back over the experience and suggest the following reasons for the failure:

    1. He waited too long to study.
    2. He had incomplete notes.
    3. He didn’t read the textbook fully.
    4. He wore a red hoodie when he took the test.
    5. He ate pizza from Pizza Heaven the night before.
    6. He only slept four hours the night before.
    7. The instructor did not do a good job teaching the material.
    8. He sat in a different seat to take the test.
    9. His favorite football team lost its game on the weekend before.

    Which of these causes are direct enough and strong enough to affect his performance on the test? All of them might have had a slight effect on his emotional, physical, or mental state, but not all of them are strong enough to affect his knowledge of the material if he had studied sufficiently and had good notes to work from. Not having enough sleep could affect his attention and mental processing more directly than, say, the pizza or the football game. We often dismiss “causes” such as the color of the hoodie as superstitions (“I had bad luck because a black cat crossed my path”).

    Taking a test while sitting in a different seat from the one where you sit in class has actually been researched (Saufley, Otaka, & Bavaresco, 1985), as has whether sitting in the front or back affects learning (Benedict & Hoag, 2004). (In both cases, the evidence so far says that they do not have an impact, but more research will probably be done.) From the list above, #1-3, #6, and #7 probably have the most direct effect on the test failure. At this point our student would need to face the psychological concept of locus of control, or responsibility: was the failure on the test mostly his doing, or his instructor’s?


    Causal reasoning is susceptible to four fallacies: historical fallacy, slippery slope, false cause, and confusing correlation with causation. The first three will be discussed later, but the last is very common, and if you take a psychology or sociology course, you will study correlation and causation in depth. This TED Talk (https://www.youtube.com/watch?v=8B271L3NtAw) explains the concept in an entertaining manner. Confusing correlation and causation is the same as confusing causal reasoning with sign reasoning, discussed below.

    Sign Reasoning

    Right now, as one of the authors is writing this chapter, the leaves on the trees are turning brown, the grass does not need to be cut every week, and geese are flying towards Florida. These are all signs of fall in this region. These signs do not make fall happen, and they don’t make the other signs—cooler temperatures, for example—happen. All the signs of fall are caused by one thing: the earth’s orbit around the sun and the tilt of its axis, which produce shorter days, less sunshine, cooler temperatures, and less chlorophyll in the leaves, leading to red and brown colors.

    It is easy to confuse signs and causes. Sign reasoning, then, is a form of inductive reasoning in which conclusions are drawn about phenomena based on events that precede or co-exist with, but do not cause, a subsequent event. Signs are like the correlation mentioned above under causal reasoning. If someone argues, “In the summer more people eat ice cream, and in the summer there is statistically more crime. Therefore, eating more ice cream causes more crime!” (or “more crime makes people eat more ice cream”), that, of course, would be silly. These are two things that happen at the same time, but they are both effects of something else: hot weather. If we see one sign, we will likely see the other. They are signs, or perhaps two things that just happen to occur at the same time, but neither is a cause of the other.
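    The ice-cream-and-crime example can be simulated in a few lines. In this toy model (all the numbers are invented for illustration), hot weather is the single common cause driving both quantities, and there is no causal link between ice cream and crime at all; yet the two still come out strongly correlated.

```python
import random

random.seed(0)

# Simulate a year of days: temperature (the common cause) drives both
# ice cream sales and crime counts; neither one influences the other.
days = []
for _ in range(365):
    temperature = random.uniform(0, 35)               # degrees Celsius
    ice_cream = 10 + 3.0 * temperature + random.gauss(0, 5)
    crime = 20 + 1.5 * temperature + random.gauss(0, 5)
    days.append((ice_cream, crime))

def correlation(pairs):
    """Pearson correlation between the two columns of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x, _ in pairs) ** 0.5
    sy = sum((y - my) ** 2 for _, y in pairs) ** 0.5
    return cov / (sx * sy)

# High correlation, even though the model contains no causal link
# between ice cream and crime: both are effects of temperature.
print(f"correlation: {correlation(days):.2f}")
```

    The correlation comes out high because both series rise and fall with temperature, which is exactly why "the numbers move together" is never, by itself, evidence of causation.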

    Analogical reasoning

    As mentioned above, analogical reasoning involves comparison. For it to be valid, the two things (schools, states, countries, businesses) must be truly alike in many important ways–essentially alike. Although Harvard University and your college are both institutions of higher education, they are not essentially alike in very many ways. They may have different missions, histories, governance, surrounding locations, sizes, clientele, stakeholders, funding sources, funding amounts, etc. So it would be foolish to argue, “Harvard has a law school; therefore, since we are both colleges, my college should have a law school, too.” On the other hand, there are colleges that are very similar to your college in all those ways, so comparisons could be valid in those cases.

    You have probably heard the phrase, “that is like comparing apples and oranges.” When you think about it, though, apples and oranges are more alike than they are different (they are both still fruit, after all). This observation points out the difficulty of analogical reasoning—how similar do the two “things” have to be for there to be a valid analogy? Second, what is the purpose of the analogy? Is it to prove that State College A has a specific program (sports, Greek societies, a theater major), therefore, College B should have that program, too? Are there other factors to consider? Analogical reasoning is one of the less reliable forms of logic, although it is used frequently.
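    One toy way to make “essentially alike” concrete is to compare two things on a checklist of attributes and compute the fraction on which they agree. The attributes and values below are invented for illustration; real analogical arguments would also have to weigh which attributes actually matter for the conclusion.

```python
def shared_fraction(a: dict, b: dict) -> float:
    """Fraction of checklist attributes on which two items agree."""
    keys = a.keys() & b.keys()
    return sum(a[k] == b[k] for k in keys) / len(keys)

# Hypothetical attribute checklists (made up for this example).
harvard = {"public": False, "size": "large", "law_school": True,
           "research_focus": True, "endowment": "huge"}
my_college = {"public": True, "size": "small", "law_school": False,
              "research_focus": False, "endowment": "modest"}
peer_college = {"public": True, "size": "small", "law_school": False,
                "research_focus": False, "endowment": "modest"}

print(shared_fraction(my_college, harvard))       # 0.0 -> weak analogy
print(shared_fraction(my_college, peer_college))  # 1.0 -> strong analogy
```

    A low score suggests the analogy rests on a superficial similarity (both are colleges), while a high score suggests the two are alike in enough relevant ways for the comparison to carry some weight.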

    To summarize, inductive or bottom-up reasoning comes in four varieties, each of which can be used correctly or incorrectly. Remember that inductive reasoning is disproven by counter-evidence, and its conclusions are always open to revision by new evidence; in that sense they are “tentative.” Also, the conclusions of inductive reasoning should be stated precisely enough to reflect the evidence.


    This page titled 14.2: Inductive Reasoning is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Kris Barton & Barbara G. Tucker (GALILEO Open Learning Materials) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.