
3.16: Reverse Engineering


    Methodologically speaking, what is classical cognitive science? The goal of classical cognitive science is to explain an agent’s cognitive abilities. Given an intact, fully functioning cognitive agent, the classical cognitive scientist must construct a theory of the agent’s internal processes. The working hypothesis is that this theory will take the form of a physical symbol system. Fleshing this hypothesis out will involve proposing a theory, and hopefully a working computer simulation, that will make explicit proposals about the agent’s symbol structures, primitive processes, and system of control.

    Given this scenario, a classical cognitive scientist will almost inevitably engage in some form of reverse engineering.

    In reverse engineering, one figures out what a machine was designed to do. Reverse engineering is what the boffins at Sony do when a new product is announced by Panasonic, or vice versa. They buy one, bring it back to the lab, take a screwdriver to it, and try to figure out what all the parts are for and how they combine to make the device work. (Pinker, 1997, p. 21)

    The reverse engineering conducted by a classical cognitive scientist is complicated by the fact that one can’t simply take cognitive agents apart with a screwdriver to learn about their design. However, the assumption that the agent is a physical symbol system provides solid guidance and an effective methodology.

    The methodology employed by classical cognitive science is called functional analysis (Cummins, 1975, 1983). Functional analysis is a top-down form of reverse engineering that maps nicely onto the multiple levels of investigation that were introduced in Chapter 2.

    Functional analysis begins by choosing and defining a function of interest to explain. Defining a function of interest entails an investigation at the computational level. What problem is being solved? Why do we say this problem is being solved and not some other? What constraining properties can be assumed to aid the solution to the problem? For instance, we saw earlier that a computational theory of language learning (identifying a grammar in the limit) might be used to motivate possible properties that must be true of a language or a language learner.

    The next step in a functional analysis is to decompose the function of interest into a set of subcomponents that has three key properties. First, each subcomponent is defined functionally, not physically. Second, each subcomponent is simpler than the original function. Third, the organization of the subcomponents—the flow of information from one component to another—is capable of producing the input-output behaviour of the original function of interest. “Functional analysis consists in analyzing a disposition into a number of less problematic dispositions such that the programmed manifestation of these analyzing dispositions amounts to a manifestation of the analyzed disposition” (Cummins, 1983, p. 28). These properties permit the functional analysis to proceed in such a way that Ryle’s regress will be avoided, and that eventually the homunculi produced by the analysis (i.e., the functional subcomponents) can be discharged, as was discussed in Chapter 2.
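    Cummins’ three properties can be made concrete with a toy decomposition (a hypothetical illustration, not an example from the text): multiplication analyzed into repeated addition, and addition analyzed into a primitive successor operation. Each subfunction is defined by what it does rather than how it is built, each is simpler than the function above it, and their organization reproduces the original input-output behaviour.

```python
def succ(n):
    """Primitive subfunction: increment by one."""
    return n + 1

def add(a, b):
    """Addition, analyzed into b repeated applications of succ."""
    for _ in range(b):
        a = succ(a)
    return a

def multiply(a, b):
    """Multiplication, analyzed into b repeated additions of a."""
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total
```

    The regress bottoms out in succ, a subfunction simple enough that one can imagine a physical mechanism directly instantiating it—which is what discharging the homunculus amounts to.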

    The analytic stage of a functional analysis belongs to the algorithmic level of analysis. This is because the organized system of subfunctions produced at this stage is identical to a program or algorithm for producing the overall input-output behaviour of the agent. However, the internal cognitive processes employed by the agent cannot be directly observed. What methods can be used to carve up the agent’s behaviour into an organized set of functions? In other words, how can observations of behaviour support decisions about functional decomposition?

    The answer to this question reveals why the analytic stage belongs to the algorithmic level of analysis. It is because the empirical methods of cognitive psychology are designed to motivate and validate functional decompositions.

    For example, consider the invention that has become known as the modal model of memory (Baddeley, 1986), which was one of the triumphs of cognitivism in the 1960s (Shiffrin & Atkinson, 1969; Waugh & Norman, 1965). According to this model, to-be-remembered information is initially kept in primary memory, which has a small capacity and short duration, and codes items acoustically. Without additional processing, items will quickly decay from primary memory. However, maintenance rehearsal, in which an item from memory is spoken aloud and thus fed back to the memory in renewed form, will prevent this decay. With additional processing like maintenance rehearsal, some of the items in primary memory pass into secondary memory, which has large capacity and long duration, and employs a semantic code.

    The modal memory model was inspired and supported by experimental data. In a standard free-recall experiment, subjects are asked to remember the items from a presented list (Glanzer & Cunitz, 1966; Postman & Phillips, 1965). The first few items presented are better remembered than the items presented in the middle—the primacy effect. Also, the last few items presented are better remembered than the middle items—the recency effect. Further experiments demonstrated a functional dissociation between the primacy and recency effects: variables that influenced one effect left the other unaffected. For example, introducing a delay before subjects recalled the list eliminated the recency effect but not the primacy effect (Glanzer & Cunitz, 1966). If a list was presented very quickly, or was constructed from low-frequency words, the primacy effect—but not the recency effect—vanished (Glanzer, 1972). To explain such functional dissociation, researchers assumed an organized system of submemories (the modal model), each with different properties.
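    The dissociation can be reproduced by a minimal simulation in the spirit of the buffer models of the 1960s (a toy sketch, not the published model; the buffer capacity, rehearsal rate, and list length are illustrative assumptions). Primary memory is a small displacement buffer whose limited rehearsal effort is shared among its contents and occasionally transfers items to secondary memory; a filled delay before recall empties the buffer.

```python
import random

def free_recall(n_items=15, buffer_size=4, theta=0.8, delay=False, trials=2000):
    """Toy buffer model: returns the recall probability for each list position."""
    counts = [0] * n_items
    for _ in range(trials):
        primary = []       # small-capacity, short-duration store
        secondary = set()  # large-capacity, long-duration store
        for item in range(n_items):
            if len(primary) == buffer_size:
                primary.pop(0)            # oldest item displaced
            primary.append(item)
            p = theta / len(primary)      # rehearsal effort shared across buffer
            for buffered in primary:
                if random.random() < p:
                    secondary.add(buffered)
        if delay:
            primary = []   # a filled retention interval empties primary memory
        for item in set(primary) | secondary:
            counts[item] += 1
    return [c / trials for c in counts]
```

    Early items enjoy rehearsal with few competitors, so they reach secondary memory reliably (primacy); the last few items are still sitting in the buffer at test (recency) unless the delay clears them—so the delay manipulation removes recency while leaving primacy intact, just as in Glanzer and Cunitz’s data.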

    The analytic stage of a functional analysis is iterative. That is, one can take any of the subfunctions that have resulted from one stage of analysis and decompose it into an organized system of even simpler sub-subfunctions. For instance, as experimental techniques were refined, the 1960s notion of primary memory was decomposed into an organized set of subfunctions that together produce what is called working memory (Baddeley, 1986, 1990). Working memory is decomposed into three basic subfunctions. The central executive is responsible for operating on symbols stored in buffers, as well as for determining how attention will be allocated across simultaneously ongoing tasks. The visuospatial buffer stores visual information. The phonological loop is used to store verbal (or speech-like) information. The phonological loop has been further decomposed into subfunctions. One is a phonological store that acts as a memory by holding symbols. The other is a rehearsal process that preserves items in the phonological store.
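    The store-plus-rehearsal decomposition of the phonological loop supports a simple back-of-the-envelope span calculation (the roughly two-second trace duration and the articulation times below are illustrative assumptions, not figures from the text): a trace in the store decays unless the rehearsal process re-articulates it in time, and one rehearsal pass over n items takes n times the articulation duration of a single item.

```python
def phonological_span(word_ms, decay_ms=2000):
    """Largest list the loop can maintain: every item must be refreshed
    within the decay window, and one rehearsal pass over n items takes
    n * word_ms, so the span is the largest n with n * word_ms <= decay_ms."""
    return decay_ms // word_ms

short_span = phonological_span(400)  # brief, one-syllable words
long_span = phonological_span(800)   # longer, multi-syllable words
```

    Shorter words allow more items to be refreshed per decay window than longer words—a word-length effect of the kind used to motivate decomposing the loop into a passive store plus an active rehearsal process.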

    We saw in Chapter 2 that functional decomposition cannot proceed indefinitely if the analysis is to serve as a scientific explanation. Some principles must be applied to stop the decomposition in order to exit Ryle’s regress. For Cummins’ (1983) functional analysis, this occurs with a final stage—causal subsumption. To causally subsume a function is to explain how physical mechanisms bring the function into being. “A functional analysis is complete when the program specifying it is explicable via instantiation—i.e., when we can show how the program is executed by the system whose capacities are being explained” (p. 35). Cummins called seeking such explanations of functions the subsumption strategy. Clearly the subsumption strategy is part of an architectural level of investigation, employing evidence involving cognitive impenetrability and modularity. It also leans heavily on evidence gathered from an implementational investigation (i.e., neuroscience).

    From a methodological perspective, classical cognitive science performs reverse engineering, in the form of functional analysis, to develop a theory (and likely a simulation) of cognitive processing. This enterprise involves both formal and empirical methods as well as the multiple levels of investigation described in Chapter 2. At the same time, classical cognitive science will also be involved in collecting data to establish the strong equivalence between the theory and the agent by establishing links between the two at the different levels of analysis, as we have been discussing in the preceding pages of the current chapter.


    This page titled 3.16: Reverse Engineering is shared under a CC BY-NC-ND license and was authored, remixed, and/or curated by Michael R. W. Dawson (Athabasca University Press).
