Special-purpose logic machines had been developed by philosophers in the late nineteenth century (Buck & Hunka, 1999; Jevons, 1870; Marquand, 1885). However, abstract descriptions of how devices could perform general-purpose symbol manipulation did not arise until the 1930s (Post, 1936; Turing, 1936). The basic properties laid out in these mathematical theories of computation define what is now known as a physical symbol system (Newell, 1980; Newell & Simon, 1976). The concept of a physical symbol system defines “a broad class of systems that is capable of having and manipulating symbols, yet is also realizable within our physical universe” (Newell, 1980, p. 136).
A physical symbol system operates on a finite set of physical tokens called symbols. These are components of a larger physical entity called a symbol structure or a symbolic expression. The system also includes a set of operators that can create, modify, duplicate, or destroy symbols. Some form of control is also required to select, at any given time, which operation to apply. A physical symbol system produces, over time, an evolving or changing collection of expressions. These expressions represent or designate entities in the world (Newell, 1980; Newell & Simon, 1976). As a result, the symbol manipulations performed by such a device permit new meanings to be derived, in the same way that new knowledge is arrived at in the proofs discovered by logicians and mathematicians (Davis & Hersh, 1981).
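These components can be made concrete with a minimal sketch. The symbol alphabet, operator names, and trivial control regime below are illustrative assumptions, not Newell's own notation; the point is only to show symbols, expressions, operators, and control working together to produce an evolving collection of expressions.

```python
# Minimal sketch of a physical symbol system. The alphabet, the three
# operators, and the control routine are illustrative assumptions.

SYMBOLS = {"A", "B", "C"}           # finite set of physical tokens

def create(sym):                    # operator: make a one-symbol expression
    assert sym in SYMBOLS
    return (sym,)

def combine(expr1, expr2):          # operator: build a larger expression
    return expr1 + expr2

def destroy(expr, sym):             # operator: delete a symbol from an expression
    return tuple(s for s in expr if s != sym)

def control(expressions):
    """A trivial control regime: select and apply operators in turn,
    yielding an evolving collection of expressions over time."""
    e = combine(create("A"), create("B"))
    expressions.append(e)
    expressions.append(destroy(e, "A"))
    return expressions

history = control([])
# history holds the system's changing collection of expressions
```

A real physical symbol system would, of course, have a far richer control regime; the sketch only separates the four roles that the text describes.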
The abstract theories that describe physical symbol systems were not developed into working artifacts until nearly the midpoint of the twentieth century. “Our deepest insights into information processing were achieved in the thirties, before modern computers came into being. It is a tribute to the genius of Alan Turing” (Newell & Simon, 1976, p. 117). The first digital computer was the Z3, invented in Germany in 1941 by Konrad Zuse (1993). In the United States, the earliest computers were the University of Pennsylvania’s ENIAC (created 1943–1946) and EDVAC (created 1945–1950), Harvard’s MARK I (created 1944), and Princeton’s IAS or von Neumann computer (created 1946–1951) (Burks, 2002; Cohen, 1999). The earliest British computer was the University of Manchester’s “Baby,” the small-scale experimental machine (SSEM) that was first activated in June 1948 (Lavington, 1980).
Although specific details vary from machine to machine, all digital computers share three general characteristics (von Neumann, 1958). First, they have a memory for the storage of symbolic structures. In what is now known as the von Neumann architecture, this is a random access memory (RAM) in which any memory location can be accessed immediately by using its address, without having to scan through intervening locations as a Turing machine must. Second, they have a mechanism, separate from memory, that is responsible for the operations that manipulate stored symbolic structures. Third, they have a controller for determining which operation to perform at any given time. In the von Neumann architecture, the control mechanism imposes serial processing, because only one operation is performed at a time.
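The three characteristics can be illustrated with a toy machine. The instruction format below is a hypothetical one invented for illustration, not any historical machine's instruction set; what matters is that memory is addressed directly, operations live in a mechanism separate from memory, and control applies them one at a time.

```python
# Toy illustration of the three von Neumann characteristics.
# The two-instruction repertoire here is an illustrative assumption.

memory = [0] * 8                    # RAM: any address is reachable directly

def apply_op(op, addr, value):      # operations are separate from memory
    if op == "STORE":
        memory[addr] = value
    elif op == "ADD":
        memory[addr] += value

program = [("STORE", 0, 5), ("ADD", 0, 3)]

for op, addr, value in program:     # serial control: one operation at a time
    apply_op(op, addr, value)

# memory[0] is now 8
```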
Perhaps the earliest example of serial control is found in the nineteenth-century punched cards used to govern the patterns in silk woven by Joseph Marie Jacquard’s loom (Essinger, 2004). During weaving, at each pass of the loom’s shuttle, holes in a card permitted some thread-controlling rods to be moved. When a rod moved, the thread that it controlled was raised; this caused the thread to be visible in that row of the pattern. A sequence of cards was created by tying cards together end to end. When this “chain” was advanced to the next card, the rods would be altered to create the appropriate appearance for the silk pattern’s next row.
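The card-by-card mechanism can be sketched in a few lines. Each card is represented here as the set of rod positions where a hole is punched; the rod count and the textual rendering of raised threads are illustrative assumptions.

```python
# Hypothetical sketch of Jacquard's card chain: a hole in a card lets a
# rod move, raising its thread and making it visible in that row.

NUM_RODS = 8                        # illustrative; real looms had many more

def weave(card_chain):
    """Advance through the chain one card per shuttle pass, producing
    one pattern row per card ('#' = raised, visible thread)."""
    rows = []
    for holes in card_chain:
        row = "".join("#" if rod in holes else "." for rod in range(NUM_RODS))
        rows.append(row)
    return rows

pattern = weave([{0, 7}, {1, 6}, {2, 5}, {3, 4}])
```

Note that a different pattern requires only a different card chain, while `weave` itself, like the loom, stays fixed.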
The use of punched cards turned the Jacquard loom into a kind of universal machine: one changed the pattern being produced not by changing the loom, but simply by loading it with a different set of punched cards. Thus not only did Jacquard invent a new loom, but he also invented the idea of using a program to control the actions of a machine. Jacquard’s program was, of course, a sequence of punched cards. Their potential for being applied to computing devices in general was recognized by computer pioneer Charles Babbage, who was inspired by Jacquard’s invention (Essinger, 2004).
By the late 1950s, it became conventional to load the program—then known as the “short code” (von Neumann, 1958)—into memory. This is called memory-stored control; the first modern computer to use this type of control was Manchester’s “Baby” (Lavington, 1980). In Chapter 2 we saw an example of this type of control in the universal Turing machine, whose ticker tape memory holds both the data to be manipulated and the description of a special-purpose Turing machine that will do the manipulating. The universal Turing machine uses the description to permit it to pretend to be the specific machine that is defined on its tape (Hodges, 1983).
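Memory-stored control can be sketched as an interpreter whose program resides in the same memory as its data, in the spirit of the universal Turing machine. The instruction format is an illustrative assumption, not a historical short code.

```python
# Sketch of memory-stored control: one memory holds both the "short code"
# and the data it manipulates. The instruction format is hypothetical.

memory = [("ADD", 2, 3, 4),         # program: memory[4] = memory[2] + memory[3]
          ("HALT",),
          10, 32, 0]                # data shares the same memory

pc = 0                              # control reads its program out of memory
while memory[pc][0] != "HALT":
    op, src1, src2, dst = memory[pc]
    if op == "ADD":
        memory[dst] = memory[src1] + memory[src2]
    pc += 1

# memory[4] is now 42
```

Because the program is itself a symbol structure in memory, it can be replaced, or in principle modified, just like any other stored expression.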
In a physical symbol system that employs memory-stored control, internal characteristics will vary over time. However, the time scale of these changes will not be uniform (Newell, 1990). The data stored in memory will likely change rapidly. However, some stored information would be expected to be more persistent: in particular, the short code that controls processing, or what cognitive scientists would call the virtual machine (Pylyshyn, 1984, 1991). Memory-stored control in turn chooses which architectural operation to invoke at any given time. In a digital computer, the architecture would not be expected to vary over time at all, because it is fixed, that is, literally built into the computing device.
The different characteristics of a physical symbol system provide a direct link back to the multiple levels of investigation that were the topic of Chapter 2. When such a device operates, it is either computing some function or solving some information processing problem. Describing this aspect of the system is the role of a computational analysis. The computation being carried out is controlled by an algorithm: the program stored in memory. Accounting for this aspect of the system is the aim of an algorithmic analysis. Ultimately, a stored program results in the device executing a primitive operation on a symbolic expression stored in memory. Identifying the primitive processes and symbols is the domain of an architectural analysis. Because the device is a physical symbol system, primitive processes and symbols must be physically realized. Detailing the physical nature of these components is the goal of an implementational analysis.
The invention of the digital computer was necessary for the advent of classical cognitive science. First, computers are general symbol manipulators. Their existence demonstrated that finite devices could generate an unbounded variety of symbolic behaviour, and thus supported a materialist alternative to Cartesian dualism. Second, the characteristics of computers, and of the abstract theories of computation that led to their development, in turn resulted in the general notion of a physical symbol system, and in the multiple levels of investigation that such systems require.
The final link in the chain connecting computers to classical cognitive science is the logicist assumption that cognition is rule-governed symbol manipulation of the sort that a physical symbol system is designed to carry out. This produces the physical symbol system hypothesis: “the necessary and sufficient condition for a physical system to exhibit general intelligent action is that it be a physical symbol system” (Newell, 1980, p. 170). By necessary, Newell meant that if an artifact exhibits general intelligence, then it must be an instance of a physical symbol system. By sufficient, Newell meant that any device that is a physical symbol system can be configured to exhibit general intelligent action; that is, he endorsed the plausibility of machine intelligence, a position that Descartes denied.
What did Newell (1980) mean by general intelligent action? He meant,
the same scope of intelligence seen in human action: that in real situations behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some physical limits. (Newell, 1980, p. 170)
In other words, human cognition must be the product of a physical symbol system. Thus human cognition must be explained by adopting all of the different levels of investigation that were described in Chapter 2.