
2.13: Levelling the Field


    The logic machines that arose late in the nineteenth century, and the twentieth-century general-purpose computers that they evolved into, are examples of information processing devices. It has been argued in this chapter that in order to explain such devices, four different vocabularies must be employed, each of which is used to answer a different kind of question. At the computational level, we ask what information processing problem is being solved by the device. At the algorithmic level, we ask what procedure or program is being used to solve this problem. At the architectural level, we ask from what primitive information capabilities is the algorithm composed. At the implementational level, we ask what physical properties are responsible for instantiating the components of the architecture.
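The four levels can be made concrete with a toy example. The following sketch is a hypothetical illustration, not drawn from the text: it uses the simple problem of sorting a list of numbers, with each level marked by a comment.

```python
# A hypothetical illustration of the four levels of analysis,
# using the toy problem of sorting a list of numbers.

# Computational level: WHAT problem is being solved?
# "Given a list, produce an ordering in which no item exceeds its successor."
def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

# Architectural level: WHAT primitive capacities are available?
# Here we assume only two primitives: comparing two items and swapping them.
def out_of_order(xs, i, j):
    return xs[i] > xs[j]

def swap(xs, i, j):
    xs[i], xs[j] = xs[j], xs[i]

# Algorithmic level: WHAT procedure, composed from those primitives,
# solves the problem? (One of many possible procedures: bubble sort.)
def bubble_sort(xs):
    for n in range(len(xs) - 1, 0, -1):
        for i in range(n):
            if out_of_order(xs, i, i + 1):
                swap(xs, i, i + 1)
    return xs

# Implementational level: WHAT physical properties realize the primitives?
# (Here, an interpreter running on silicon; in cognition, neural circuitry.)

data = [3, 1, 2]
bubble_sort(data)
print(is_sorted(data))  # True
```

Note that each level answers a different question about the same device: the computational specification (`is_sorted`) says nothing about the procedure, and the procedure (`bubble_sort`) says nothing about the physical substrate that runs it.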

    As we progress from the computational question through questions about algorithm, architecture, and implementation we are moving in a direction that takes us from the very abstract to the more concrete. From this perspective each of these questions defines a different level of analysis, where the notion of level is to be taken as “level of abstractness.” The main theme of this chapter, then, is that to fully explain an information processing device one must explain it at four different levels of analysis.

    The theme that I’ve developed in this chapter is an elaboration of an approach with a long history in cognitive science that has been championed in particular by Pylyshyn (1984) and Marr (1982). This historical approach, called the tri-level hypothesis (Dawson, 1998), is used to explain information devices by performing analyses at three different levels: computational, algorithmic, and implementational. The approach that has been developed in this chapter agrees with this view, but adds to it an additional level of analysis: the architectural. We will see throughout this book that an information processing architecture has properties that separate it from both algorithm and implementation, and that treating it as an independent level is advantageous.

    The view that information processing devices must be explained by multiple levels of analysis has important consequences for cognitive science, because the general view in cognitive science is that cognition is also the result of information processing. This implies that a full explanation of human or animal cognition also requires multiple levels of analysis.

    Not surprisingly, it is easy to find evidence of all levels of investigation being explored as cognitive scientists probe a variety of phenomena. For example, consider how classical cognitive scientists explore the general phenomenon of human memory.

At the computational level, researchers interested in the formal characterization of cognitive processes (such as those who study cognitive informatics [Wang, 2003, 2007]) provide abstract descriptions of what it means to memorize, including attempts to mathematically characterize the capacity of human memory (Lopez, Nunez, & Pelayo, 2007; Wang, 2009; Wang, Liu, & Wang, 2003).

    At the algorithmic level of investigation, the performance of human subjects in a wide variety of memory experiments has been used to reverse engineer “memory” into an organized system of more specialized functions (Baddeley, 1990) including working memory (Baddeley, 1986, 2003), declarative and nondeclarative memory (Squire, 1992), semantic and episodic memory (Tulving, 1983), and verbal and imagery stores (Paivio, 1971, 1986). For instance, the behaviour of the serial position curve obtained in free recall experiments under different experimental conditions was used to pioneer cognitive psychology’s proposal of the modal memory model, in which memory was divided into a limited-capacity, short-term store and a much larger-capacity, long-term store (Waugh & Norman, 1965). The algorithmic level is also the focus of the art of memory (Yates, 1966), in which individuals are taught mnemonic techniques to improve their ability to remember (Lorayne, 1998, 2007; Lorayne & Lucas, 1974).

That memory can be reverse engineered into an organized system of subfunctions leads cognitive scientists to investigate the architecture of memory. For instance, what kinds of encodings are used in each memory system, and what primitive processes are used to manipulate stored information? Richard Conrad’s (1964a, 1964b) famous studies of confusion in short-term memory indicated that it represented information using an acoustic code. One of the most controversial topics in classical cognitive science, the “imagery debate,” concerns whether the primitive form of spatial information is imagery, or whether images are constructed from more primitive propositional codes (Anderson, 1978; Block, 1981; Kosslyn, Thompson, & Ganis, 2006; Pylyshyn, 1973, 1981a, 2003b).

    Even though classical cognitive science is functionalist in nature and (in the eyes of its critics) shies away from biology, it also appeals to implementational evidence in its study of memory. The memory deficits revealed in patient Henry Molaison after his hippocampus was surgically removed to treat his epilepsy (Scoville & Milner, 1957) provided pioneering biological support for the functional separations of short-term from long-term memory and of declarative memory from nondeclarative memory. Modern advances in cognitive neuroscience have provided firm biological foundations for elaborate functional decompositions of memory (Cabeza & Nyberg, 2000; Poldrack et al., 2001; Squire, 1987, 2004). Similar evidence has been brought to bear on the imagery debate as well (Kosslyn, 1994; Kosslyn et al., 1995; Kosslyn et al., 1999; Kosslyn, Thompson, & Alpert, 1997).

    In the paragraphs above I have taken one tradition in cognitive science (the classical) and shown that its study of one phenomenon (human memory) reflects the use of all of the levels of investigation that have been the topic of the current chapter. However, the position that cognitive explanations require multiple levels of analysis (e.g., Marr, 1982) has not gone unchallenged. Some researchers have suggested that this process is not completely appropriate for explaining cognition or intelligence in biological agents (Churchland, Koch, & Sejnowski 1990; Churchland & Sejnowski, 1992).

    For instance, Churchland, Koch, & Sejnowski (1990, p. 52) observed that “when we measure Marr’s three levels of analysis against levels of organization in the nervous system, the fit is poor and confusing.” This observation is based on the fact that there appear to be a great many different spatial levels of organization in the brain, which suggests to Churchland, Koch, & Sejnowski that there must be many different implementational levels, which implies in turn that there must be many different algorithmic levels.

    The problem with this argument is that it confuses ontology with epistemology. That is, Churchland, Koch, & Sejnowski (1990) seemed to be arguing that Marr’s levels are accounts of the way nature is—that information processing devices are literally organized into the three different levels. Thus when a system appears to exhibit, say, multiple levels of physical organization, this brings Marr-as-ontology into question. However, Marr’s levels do not attempt to explain the nature of devices, but instead provide an epistemology—a way to inquire about the nature of the world. From this perspective, a system that has multiple levels of physical organization would not challenge Marr, because Marr and his followers would be comfortable applying their approach to the system at each of its levels of physical organization.

Other developments in cognitive science provide deeper challenges to the multiple-levels approach. As has been outlined in this chapter, the notion of multiple levels of explanation in cognitive science is directly linked to two key ideas: 1) that information processing devices invite and require this type of explanation, and 2) that cognition is a prototypical example of information processing. Recent developments in cognitive science represent challenges to these key ideas. For instance, embodied cognitive science takes the position that cognition is not information processing of the sort that involves the rule-governed manipulation of mentally represented worlds; it is instead the control of action on the world (Chemero, 2009; Clark, 1997, 1999; Noë, 2004, 2009; Robbins & Aydede, 2009). Does the multiple-levels approach apply if the role of cognition is radically reconstrued?

Churchland, Koch, & Sejnowski (1990, p. 52) suggested that “which really are the levels relevant to explanation in the nervous system is an empirical, not an a priori, question.” One of the themes of the current book is to take this suggestion to heart by seeing how well the same multiple levels of investigation can be applied to the three major perspectives in modern cognitive science: classical, connectionist, and embodied. In the next three chapters, I begin this pursuit by using the multiple levels introduced in Chapter 2 to investigate the nature of classical cognitive science (Chapter 3), connectionist cognitive science (Chapter 4), and embodied cognitive science (Chapter 5). Can the multiple levels of investigation be used to reveal principles that unify these three different and frequently mutually antagonistic approaches? Or is modern cognitive science beginning to fracture in a fashion similar to what has been observed in experimental psychology?

This page titled 2.13: Levelling the Field is shared under a CC BY-NC-ND license and was authored, remixed, and/or curated by Michael R. W. Dawson (Athabasca University Press).
