
11.6: Inferencing


    Inferencing is used to build up complex situation models from limited information. For example, in 1973 John Bransford and Marcia Johnson conducted a memory experiment in which two groups read variations of the same sentence.

    The first group read the text "John was trying to fix the bird house. He was pounding the nail when his father came out to watch him do the work".

    The second group read the text "John was trying to fix the bird house. He was looking for the nail when his father came out to watch him do the work".

    After reading, test statements were presented to the participants. These statements contained the word "hammer", which did not occur in the original sentences, e.g.: "John was using a hammer to fix the birdhouse. He was looking for the nail when his father came out to watch him". Participants in the first group reported having seen 57% of the test statements, while participants in the second group reported having seen only 20%.

    As one can see, participants in the first group tended to believe they had seen the word "hammer": they made the inference that John used a hammer to pound the nail. This memory test is a good example of what is meant by making inferences and of how inferences are used to complete situation models.

    While reading a text, inferencing creates information which is not explicitly stated in the text; hence it is a creative process. It is very important for text understanding in general, because a text cannot include all the information needed to understand the sense of a story. Texts usually leave out what is known as world knowledge: knowledge about situations, persons or items that most people share and that therefore does not need to be explicitly stated. Each reader should be able to infer this kind of information, for example that we usually use hammers to pound nails. Without inferencing, done automatically by our brain, a text would have to include all the information it deals with, and such a text would be impossible to write.

    There are a number of different kinds of inference:

    Anaphoric Inference

    This kind of inference connects objects or persons across sentences; it is therefore responsible for linking cross-sentence information. For example, in "John hit the nail. He was proud of his stroke", we directly infer that "he" and "his" refer to "John". We normally make this kind of inference quite easily. But in sentences where several persons and the words referring to them are mixed up, readers can have trouble understanding the story at first; this is normally regarded as bad writing style.
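    As a toy illustration (a simple recency heuristic, not a real model of anaphora resolution; the entity set and pronoun list are invented for the example), the link from a pronoun back to its antecedent can be sketched as:

```python
# Toy sketch of anaphoric inference as a recency heuristic:
# each pronoun is linked to the most recently mentioned compatible entity.
MALE_PRONOUNS = {"he", "his", "him"}

def resolve_pronouns(sentences, entities):
    """Link each male pronoun to the most recently mentioned entity."""
    last_mentioned = None
    links = []
    for sentence in sentences:
        for word in sentence.replace(".", "").split():
            if word in entities:
                last_mentioned = word
            elif word.lower() in MALE_PRONOUNS:
                links.append((word, last_mentioned))
    return links

links = resolve_pronouns(["John hit the nail", "He was proud of his stroke"],
                         {"John"})
# Both "He" and "his" are linked back to "John"
```

    A heuristic this crude fails exactly where the text says readers struggle: when several candidate antecedents are mixed together, recency alone no longer picks the right one.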

    Instrumental Inference

    This type of inference concerns the tools and methods used in the text, like the hammer in the example above. For example, if you read about somebody flying to New York, you would not infer that this person built a hang glider and jumped off a cliff, but rather that he or she took a plane, since nothing else is mentioned in the text and a plane is the most common way of flying to New York. If there is no specific information about tools, instruments and methods, we get this information from our general world knowledge.

    Causal Inference

    A causal inference is the conclusion that one event in the text caused another, as in "He hit his nail. So his finger ached". The first sentence gives the reason why the situation described in the second sentence came about. It would be more difficult to draw a causal inference from an example like "He hit his nail. So his father ran away", although with some imagination one could construct an inference here as well.

    Causal inferences create causal connections between text elements. These connections are divided into local and global connections. Local connections are made within a range of about one to three sentences; the range depends on factors such as working-memory capacity and concentration during reading. Global connections are drawn between the information in one sentence and the background information gathered so far about the whole text. Problems with causal inferences can occur when a story is inconsistent; for example, vegans eating steak would be inconsistent. An interesting fact about causal inferences (Goldstein, 2005) is that inferences which are not easily drawn at first are easier to remember later. This may be because they required a higher mental processing capacity while being drawn, so these "not-so-easy" inferences seem to be marked in a way that makes them easier to remember.

    Predictive / Forward Inference

    A predictive or forward inference uses the reader's general world knowledge to predict the consequences of what is currently happening in the story and to build those predictions into the situation model.

    Integrating Inferences into Situation Models

    The question of how models enter inferential processes is highly controversial in the two disciplines of cognitive psychology and artificial intelligence. A.I. has given deep insight into psychological processes, and since the two disciplines crossed paths they have become two main pillars of cognitive science. The arguments in the two fields are largely independent of each other, although they have much in common.

    Johnson-Laird (1983) distinguishes three types of reasoning theories in which inferencing plays an important role. The first class is geared to logical calculi and has been implemented in many formal systems. The programming language Prolog arose from this way of dealing with reasoning, and in psychology many theories postulate formal rules of inference, a "mental logic". These rules work in a purely syntactic way and are therefore "context free", blind to the content they operate on. A simple example clarifies the problem with this type of theory:

        If patients have cystitis, then they are given penicillin.

    and the logical conclusion:

        If patients have cystitis and are allergic to penicillin, then they are given penicillin.

    This is logically valid, but it clashes with our common-sense notion of logic.
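    A minimal sketch makes the blindness visible (an invented illustration, not any theory's actual implementation): a purely syntactic rule fires whenever its condition matches, regardless of what else is known about the case.

```python
# Sketch of a "context free" syntactic rule: modus ponens applied blindly.
# The rule cannot see that the extra fact "allergic to penicillin"
# makes its conclusion absurd.

def apply_rule(facts, condition, conclusion):
    """If the condition is among the facts, add the conclusion."""
    if condition in facts:
        return facts | {conclusion}
    return facts

rule = ("has cystitis", "is given penicillin")

# Case 1: the intended inference.
print(apply_rule({"has cystitis"}, *rule))

# Case 2: the added allergy changes nothing for the rule,
# so the patient is still "given penicillin".
print(apply_rule({"has cystitis", "allergic to penicillin"}, *rule))
```

    Both cases derive "is given penicillin", which is exactly the common-sense failure the cystitis example illustrates.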

    The second class of theories postulates content-specific rules of inference. Their origin lies in programming languages and production systems. They work with rules of the form "If x is a, then x is b": if one wants to show that x is b, then showing that x is a becomes a sub-goal of the argument. The idea of basing psychological theories of reasoning on content-specific rules was discussed by Johnson-Laird and Wason, and various sorts of such theories have been proposed. A related idea is that reasoning depends on the accumulation of specific examples within a connectionist framework, where the distinction between inference and recall is blurred.
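    The sub-goal mechanism can be sketched as a small backward-chaining loop; the rule base here is an invented example, not a real production system:

```python
# Backward chaining over rules of the form "if x is a, then x is b":
# to prove a goal, make the rule's condition a sub-goal and recurse.

RULES = {              # conclusion -> condition that would establish it
    "is b": "is a",
    "is a": "is observed",
}

def prove(goal, facts):
    """Return True if the goal follows from the facts via the rule base."""
    if goal in facts:
        return True
    condition = RULES.get(goal)
    if condition is None:
        return False
    return prove(condition, facts)   # the condition becomes a sub-goal

print(prove("is b", {"is observed"}))   # chain: is b <- is a <- is observed
```

    Unlike the context-free rules above, a production system's rules mention specific content, so what can be inferred depends directly on what the rules are about.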

    The third class of theories is based on mental models and does not use any rules of inference. The reader builds mental models of the things heard or read, and these models are permanently updated: a model is extended with the features of new information as long as no information conflicts with the model. If a conflict arises, the model is generally rebuilt so that the conflict-generating information fits into the new model.
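    The update-or-rebuild cycle can be sketched as follows; the conflict test (a proposition versus its explicit negation) and the facts themselves are illustrative assumptions:

```python
# Sketch of the mental-model view: a model accumulates facts until a
# conflict appears, at which point it is rebuilt around the new fact.

def negate(fact):
    """Explicit negation: 'X' <-> 'not X' (an illustrative convention)."""
    return fact[4:] if fact.startswith("not ") else "not " + fact

def update_model(model, fact):
    """Extend the model, or rebuild it if the new fact conflicts."""
    if negate(fact) in model:
        # Rebuild: drop the contradicted belief, keep the rest plus the new fact.
        return (model - {negate(fact)}) | {fact}
    return model | {fact}

model = set()
for fact in ["John has a hammer", "John is outside", "not John is outside"]:
    model = update_model(model, fact)

print(model)  # the rebuilt model keeps the hammer but revises John's location
```

    The third fact conflicts with the second, so the model is rebuilt: the contradicted belief is dropped while the unaffected parts of the model survive, mirroring how a mental model is revised rather than discarded wholesale.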

    This page titled 11.6: Inferencing is shared under a CC BY-SA 3.0 license and was authored, remixed, and/or curated by Wikipedia via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.