
1.3: Connection to Instance Theory


Instance theories of cognition have been developed to account for a range of cognitive abilities, from learning (Jamieson, Crump, and Hannah 2012), memory (D. L. Hintzman 1988; D. L. Hintzman 1984; D. Hintzman 1986), skill acquisition (Logan 1988), and categorization and concept formation (Jacoby and Brooks 1984), to judgment and decision-making (Dougherty, Gettys, and Ogden 1999). A common assumption among instance theories is that single experiences are a foundational unit of knowledge representation: people retain the details of their specific experiences. In some computational models, such as MINERVA (D. Hintzman 1986), this idea is expressed as a multiple-trace architecture. Every experience is represented as a feature vector coding the “gory detail” of the elemental features of that experience. These experiences are laid down in an instance-based memory, which holds an ever-expanding body of unique traces, one for each successive new experience. The notion that people have instance-based memory representations has been met with some skepticism, perhaps because the idea that brains could have enough storage space to hold such a large repository of experiences seems implausible: instance theory must be wrong because you would eventually get a “hard drive full” message. Regardless of the limits of brain space, the more interesting implication of instance theory is what can be done with a large pool of examples.
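The multiple-trace idea can be put in a few lines of code. The sketch below is a toy illustration, not Hintzman's actual MINERVA implementation; the class name, feature count, and example vectors are all assumptions made up for the demonstration. The essential point is that memory is nothing more than an ever-growing list of feature vectors, with no merging or overwriting.

```python
import numpy as np

class InstanceMemory:
    """Toy multiple-trace store: every experience is kept verbatim."""

    def __init__(self, n_features):
        self.n_features = n_features
        self.traces = []  # one vector per experience; never merged or overwritten

    def experience(self, event):
        """Lay down a new trace coding the elemental features of an experience."""
        event = np.asarray(event, dtype=float)
        assert event.shape == (self.n_features,)
        self.traces.append(event)


memory = InstanceMemory(n_features=5)
memory.experience([1, -1, 0, 1, -1])
memory.experience([1, -1, 1, 1, -1])  # a similar but distinct experience
print(len(memory.traces))  # the store grows by one trace per experience
```

Note that nothing is abstracted at storage time: two nearly identical experiences still produce two separate traces, which is exactly the property skeptics worry about and the property the retrieval process exploits.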

In Jacoby and Brooks’ (1984) view, a large pool of examples can provide a non-analytic basis for cognition. When people have a large number of examples, they can perform classification tasks by analogy, on the basis of similarity. When presented with a face, rather than pulling up a feature list for faces and inspecting each element of the stimulus to determine whether the rules permit calling it a face, people can look at the face, spontaneously recall many similar prior examples of this kind of stimulus, and call it a face because it looks globally similar to other faces they have seen.

The essential ingredients for instance theory are (1) a large pool of examples and (2) sensitivity to similarity. Computational models of instance theory, like most global matching models of memory (Eich 1982; D. L. Hintzman 1984; Humphreys et al. 1989; Murdock 1993), contain both ingredients. The models store items, experiences, or examples as high-dimensional feature vectors in a memory. The models also assume that retrieval from memory is driven by cue similarity. Specifically, the cues or features present in the current environment are represented as a high-dimensional feature vector, and this cue-of-the-present-moment (the probe) is used to retrieve similar traces from memory. Mathematically, this can be achieved by computing the correlation or cosine between the features of the probe and the features of each trace in memory. The models then assume that people might respond to the probe in the same way that they responded to the traces retrieved from memory.
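The cue-driven retrieval step can be sketched as follows. This is a minimal illustration under assumed toy vectors, not the code of any published model; the five-feature "face" traces are invented for the example, and cosine is used as the similarity measure, as the paragraph above suggests.

```python
import numpy as np

def cosine(a, b):
    """Similarity between two feature vectors, ranging from -1 to 1."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Each row is one trace in memory: stored features of a past experience.
memory = np.array([
    [ 1, -1,  1,  1, -1],   # a past "face" experience
    [ 1, -1,  1, -1, -1],   # a similar face, one feature different
    [-1,  1, -1, -1,  1],   # an unrelated experience
], dtype=float)

# The cue-of-the-present-moment: features of the current environment.
probe = np.array([1, -1, 1, 1, -1], dtype=float)

# Retrieval is cue-driven similarity: compare the probe to every trace.
activations = np.array([cosine(probe, trace) for trace in memory])
print(activations)  # the most similar trace activates most strongly
```

Here the probe matches the first trace exactly (similarity 1.0), partially matches the second (0.6), and mismatches the third (-1.0), so a response based on the retrieved traces would be dominated by prior "face" experiences.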

Finally, instance theories work well when there is structure in the data. Structure refers to the fact that the features of our experiences are not random: they are correlated with themselves over time and space. If our experiences were random, our world would look and sound like the white noise from an old TV that cannot receive a station. The insight from instance theory is that sensitivity to the structure of the world around us can be obtained by a reconstructive memory process capable of preserving the details of our experiences. The experiences contain the structure, and we use similarity between the experiences to become sensitive to that structure. Reasoning by analogy, we see this idea validated by the many successes of machine learning techniques applied to previously hard classification problems. Those solutions were obtained by harvesting enough examples to support accurate classifier generalization to new exemplars.
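One way to see how a reconstructive process extracts structure is with a MINERVA-style "echo". The sketch below uses invented toy vectors, not a published simulation; what it borrows from MINERVA 2 is the form of the retrieval rule, in which every trace contributes to the echo in proportion to its cubed similarity to the probe. Because most traces are variants of one shared pattern, a probe with half its features missing still pulls back an echo whose missing features are filled in by that shared structure.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Experiences that share structure: variants of one underlying pattern,
# each with a feature or two flipped, plus one unrelated experience.
memory = np.array([
    [ 1,  1, -1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1, -1, -1, -1],   # one feature flipped
    [ 1, -1, -1, -1,  1, -1,  1, -1],   # a different feature flipped
    [ 1,  1, -1,  1,  1, -1,  1, -1],   # another variant
    [-1, -1,  1,  1, -1,  1, -1,  1],   # an unrelated experience
], dtype=float)

# A degraded probe: only the first four features are present (rest are 0).
probe = np.array([1, 1, -1, -1, 0, 0, 0, 0], dtype=float)

# Echo: each trace contributes in proportion to its cubed similarity;
# cubing exaggerates the pull of the closest traces, as in MINERVA 2.
activations = np.array([cosine(probe, t) for t in memory]) ** 3
echo = activations @ memory
print(np.sign(echo))  # the missing features are reconstructed from structure
```

No trace was ever abstracted into a prototype at storage time, yet the sign pattern of the echo recovers the full underlying pattern, including the four features the probe left blank.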

    Procedures of Mind

Our review of instance theories also anticipates some of the lackluster findings for cognitive-enhancing technologies, like brain training. The work of Kolers was particularly insightful (Kolers and Roediger 1984). Among other things, Kolers spilled a great deal of ink on the topic of learning to read upside-down text (and other geometric rotations of text). Unsurprisingly, he found that people are worse at reading oddly rotated text than normal upright text, and that people can learn to get better at reading these unfamiliar rotations. More important were his findings about what it was that people had learned: the details. People learned about the very specific things they were practicing. A subject learning to read upside-down text would not get better at reading just any upside-down text in general; instead, they got better at reading the specific letters, words, and sentences contained in the examples they practiced. In other words, there wasn’t much far transfer to be had. In Kolers’s view, people were learning “procedures of mind” for solving the specific pattern-analyzing problems they were confronted with in the training examples. Here, specificity is the rule and generalization the exception. Generalization can occur when the specific procedures applied to one problem happen to be useful for another. Taking a broader view, the specificity and lack of far transfer associated with learning new skills appears to be a general principle. For example, brain-training games train the game in specific, not the brain in general (Simons et al. 2016). Lumosity should have read Kolers.


    This page titled 1.3: Connection to Instance Theory is shared under a not declared license and was authored, remixed, and/or curated by Matthew J. C. Crump via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
