
3.8: Componentiality, Computability, and Cognition


    In 1840, computer pioneer Charles Babbage displayed a portrait of loom inventor Joseph Marie Jacquard for the guests at the famous parties in his home (Essinger, 2004). The small portrait was incredibly detailed. Babbage took great pleasure in the fact that most people who first saw the portrait mistook it to be an engraving. It was instead an intricate fabric woven on a loom of the type that Jacquard himself invented.

    The amazing detail of the portrait was the result of its being composed of 24,000 rows of weaving. In a Jacquard loom, punched cards determined which threads would be raised (and therefore visible) for each row in the fabric. Each thread in the loom was attached to a rod; a hole in the punched card permitted a rod to move, raising its thread. The complexity of the Jacquard portrait was produced by using 24,000 punched cards to control the loom.

    Though Jacquard’s portrait was impressively complicated, the process used to create it was mechanical, simple, repetitive—and local. With each pass of the loom’s shuttle, weaving a set of threads together into a row, the only function of a punched card was to manipulate rods. In other words, each punched card only controlled small components of the overall pattern. While the entire set of punched cards represented the total pattern to be produced, this total pattern was neither contained in, nor required by, an individual punched card as it manipulated the loom’s rods. The portrait of Jacquard was a global pattern that emerged from a long sequence of simple, local operations on the pattern’s components.

    In the Jacquard loom, punched cards control processes that operate on local components of the “expression” being woven. The same is true of physical symbol systems. Physical symbol systems are finite devices that are capable of producing an infinite variety of potential behaviour. This is possible because the operations of a physical symbol system are recursive. However, recursion alone does not complete the explanation. In addition, the rules of a physical symbol system are local or componential, in the sense that they act on local components of an expression, not on the expression as a whole.

    For instance, one definition of a language is the set of all of its grammatical expressions (Chomsky, 1957). Given this definition, it is logically possible to treat each expression in the set as an unanalyzed whole to which some operation could be applied. This is one way to interpret a behaviourist theory of language (Skinner, 1957): each expression in the set is a holistic verbal behaviour whose likelihood of being produced is a result of reinforcement and stimulus control of the expression as a whole.

    However, physical symbol systems do not treat expressions as unanalyzed wholes. Instead, the recursive rules of a physical symbol system are sensitive to the atomic symbols from which expressions are composed. We saw this previously in the example of context-free grammars that were used to construct the phrase markers of Figures 3-6 and 3-7. The rules in such grammars do not process whole phrase markers, but instead operate on the different components (e.g., nodes like S, N, VP) from which a complete phrase marker is constructed.
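    The componential character of such grammars can be illustrated with a minimal sketch. The toy grammar and vocabulary below are hypothetical illustrations (they are not the grammars of Figures 3-6 and 3-7); the point is only that each rule rewrites a single node, never a whole phrase marker:

```python
import random

# Hypothetical toy context-free grammar; node names (S, NP, VP, N, V)
# follow the chapter's conventions.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["chased"]],
}

def expand(node):
    """Recursively rewrite a single node; each rule sees only one symbol,
    never the complete expression being built."""
    if node not in RULES:          # terminal symbol: leave it as-is
        return [node]
    expansion = random.choice(RULES[node])
    result = []
    for child in expansion:        # rules apply to components, not wholes
        result.extend(expand(child))
    return result

print(" ".join(expand("S")))       # e.g., "the dog chased the cat"
```

    No rule in this sketch ever inspects the sentence as a whole; the global expression emerges, like Jacquard's portrait, from a sequence of local operations.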

    The advantage of operating on symbolic components, and not on whole expressions, is that one can use a sequence of very basic operations—writing, changing, erasing, or copying a symbol—to create an overall effect of far greater scope than might be expected. As Henry Ford said, nothing is particularly hard if you divide it into small jobs. We saw the importance of this in Chapter 2 when we discussed Leibniz’ mill (Leibniz, 1902), the Chinese room (Searle, 1980), and the discharging of homunculi (Dennett, 1978). In a materialist account of cognition, thought is produced by a set of apparently simple, mindless, unintelligent actions—the primitives that make up the architecture.

    The small jobs carried out by a physical symbol system reveal that such a system has a dual nature (Haugeland, 1985). On the one hand, symbol manipulations are purely syntactic—they depend upon identifying a symbol’s type, and not upon semantically interpreting what the symbol stands for. On the other hand, a physical symbol system’s manipulations are semantic—symbol manipulations preserve meanings, and can be used to derive new, sensible interpretations.

    Interpreted formal tokens lead two lives: syntactical lives, in which they are meaningless markers, moved according to the rules of some self-contained game; and semantic lives, in which they have meanings and symbolic relations to the outside world. (Haugeland, 1985, p. 100)

    Let us briefly consider these two lives. First, we have noted that the rules of a physical symbol system operate on symbolic components of a whole expression. For this to occur, all that is required is that a rule identifies a particular physical entity as being a token or symbol of a particular type. If the symbol is of the right type, then the rule can act upon it in some prescribed way.

    For example, imagine a computer program that is playing chess. For this program, the “whole expression” is the total arrangement of game pieces on the chess board at any given time. The program analyzes this expression into its components: individual tokens on individual squares of the board. The physical characteristics of each component token can then be used to identify to what symbol class it belongs: queen, knight, bishop, and so on. Once a token has been classified in this way, appropriate operations can be applied to it. If a game piece has been identified as being a “knight,” then only knight moves can be applied to it—the operations that would move the piece like a bishop cannot be applied, because the token has not been identified as being of the type “bishop.”
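    A minimal sketch of this kind of type-based dispatch follows. The board representation and move generators are hypothetical simplifications (captures, blocking pieces, and the rest of chess are ignored); what matters is that a rule is selected purely by a token's identified type:

```python
def knight_moves(square):
    """Moves legal for tokens classified as type 'knight'."""
    r, c = square
    deltas = [(1, 2), (2, 1), (-1, 2), (-2, 1),
              (1, -2), (2, -1), (-1, -2), (-2, -1)]
    return [(r + dr, c + dc) for dr, dc in deltas
            if 0 <= r + dr < 8 and 0 <= c + dc < 8]

def bishop_moves(square):
    """Diagonal rays for tokens classified as type 'bishop'."""
    r, c = square
    moves = []
    for dr, dc in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        nr, nc = r + dr, c + dc
        while 0 <= nr < 8 and 0 <= nc < 8:
            moves.append((nr, nc))
            nr, nc = nr + dr, nc + dc
    return moves

# A rule is chosen by symbol type alone, never by what the piece "means".
MOVE_RULES = {"knight": knight_moves, "bishop": bishop_moves}

def legal_moves(piece_type, square):
    return MOVE_RULES[piece_type](square)

print(legal_moves("knight", (0, 1)))   # only knight operations apply
```

    Once a token is classified as a “knight,” bishop operations are simply unavailable to it; the dispatch table enforces the syntactic constraint described above.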

    Similar syntactic operations are at the heart of a computing device like a Turing machine. When the machine head reads a cell on the ticker tape (another example of componentiality!), it uses the physical markings on the tape to determine that the cell holds a symbol of a particular type. This identification—in conjunction with the current physical state of the machine head—is sufficient to determine which instruction to execute.
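    The same point can be made concrete with a small sketch of a Turing machine. The machine below is a hypothetical example (it simply inverts a binary string); note that each step is determined entirely by the pair (current state, symbol read), with no access to the tape as a whole:

```python
# Transition table: (state, symbol read) -> (symbol to write, head move, next state)
PROGRAM = {
    ("invert", "0"): ("1", +1, "invert"),
    ("invert", "1"): ("0", +1, "invert"),
    ("invert", "_"): ("_",  0, "halt"),   # "_" is the blank symbol: stop
}

def run(tape):
    """Execute the machine on an input tape of '0's and '1's."""
    cells = list(tape) + ["_"]            # blank marks the end of the input
    head, state = 0, "invert"
    while state != "halt":
        # Purely formal step: the physical mark in one cell, plus the
        # machine state, selects the instruction to execute.
        write, move, state = PROGRAM[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(run("0110"))  # → "1001"
```

    The head never "knows" what the string denotes; it only matches shapes in single cells, one at a time.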

    To summarize, physical symbol systems are syntactic in the sense that their rules are applied to symbols that have been identified as being of a particular type on the basis of their physical shape or form. Because the shape or form of symbols is all that matters for the operations to be successfully carried out, it is natural to call such systems formal. Formal operations are sensitive to the shape or form of individual symbols, and are not sensitive to the semantic content associated with the symbols.

    However, it is still the case that formal systems can produce meaningful expressions. The punched cards of a Jacquard loom only manipulate the positions of thread-controlling rods. Yet these operations can produce an intricate woven pattern such as Jacquard’s portrait. The machine head of a Turing machine reads and writes individual symbols on a ticker tape. Yet these operations permit this device to provide answers to any computable question. How is it possible for formal systems to preserve or create semantic content?

    In order for the operations of a physical symbol system to be meaningful, two properties must be true. First, the symbolic structures operated on must have semantic content. That is, the expressions being manipulated must have some relationship to states of the external world that permits the expressions to represent these states. This relationship is a basic property of a physical symbol system, and is called designation (Newell, 1980; Newell & Simon, 1976). “An expression designates an object if, given the expression, the system can either affect the object itself or behave in ways dependent on the object” (Newell & Simon, 1976, p. 116).

    Explaining designation is a controversial issue in cognitive science and philosophy. There are many different proposals for how designation, which is also called the problem of representation (Cummins, 1989) or the symbol grounding problem (Harnad, 1990), occurs. The physical symbol system hypothesis does not propose a solution, but necessarily assumes that such a solution exists. This assumption is plausible to the extent that computers serve as existence proofs that designation is possible.

    The second semantic property of a physical symbol system is that not only are individual expressions meaningful (via designation), but the evolution of expressions—the rule-governed transition from one expression to another—is also meaningful. That is, when some operation modifies an expression, this modification is not only syntactically correct, but it also makes sense semantically. As rules modify symbolic structures, they preserve meanings in the domain that the symbolic structures designate, even though the rules themselves are purely formal. The application of a rule should not produce an expression that is meaningless. This leads to what is known as the formalist’s motto: “If you take care of the syntax, then the semantics will take care of itself” (Haugeland, 1985, p. 106).
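    A minimal sketch can show the formalist's motto in action. The rule below is modus ponens: from P and “if P then Q,” derive Q. The symbols and the little inference chain are hypothetical illustrations; the rule operates by matching the shape of expressions (here, tuples), yet every expression it derives is true whenever its premises are:

```python
def modus_ponens(premises):
    """From P and ('if', P, Q), derive Q — purely by pattern matching
    on the form of expressions, never by consulting their meanings."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for e in list(derived):
            # Syntactic test: is this a conditional whose antecedent
            # is already among the derived expressions?
            if isinstance(e, tuple) and e[0] == "if" and e[1] in derived:
                if e[2] not in derived:
                    derived.add(e[2])
                    changed = True
    return derived

premises = {"rain",
            ("if", "rain", "wet_streets"),
            ("if", "wet_streets", "slippery")}
print(modus_ponens(premises))
# derives "wet_streets" and "slippery" without interpreting any symbol
```

    If the premises truly describe the world, so do the conclusions: taking care of the syntax has taken care of the semantics.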

    The assumption that applying a physical symbol system’s rules preserves meaning is a natural consequence of classical cognitive science’s commitment to logicism. According to logicism, thinking is analogous to using formal methods to derive a proof, as is done in logic or mathematics. In these formal systems, when one applies rules of the system to true expressions (e.g., the axioms of a system of mathematics which by definition are assumed to be true [Davis & Hersh, 1981]), the resulting expressions must also be true. An expression’s truth is a critical component of its semantic content.

    It is necessary, then, for the operations of a formal system to be defined in such a way that 1) they only detect the form of component symbols, and 2) they are constrained in such a way that manipulations of expressions are meaningful (e.g., truth preserving). This results in classical cognitive science’s interest in universal machines.

    A universal machine is a device that is maximally flexible in two senses (Newell, 1980). First, its behaviour is responsive to its inputs; a change in inputs is capable of producing a change in behaviour. Second, a universal machine must be able to compute the widest possible variety of input-output functions. This “widest variety” is known as the set of computable functions.

    A device that can compute every possible input-output function does not exist. The Turing machine was invented and used to prove that there exist some functions that are not computable (Turing, 1936). However, the subset of functions that are computable is large and important:

    It can be proved mathematically that there are infinitely more functions than programs. Therefore, for most functions there is no corresponding program that can compute them. . . . Fortunately, almost all these noncomputable functions are useless, and virtually all the functions we might want to compute are computable. (Hillis, 1998, p. 71)
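    The claim that there are infinitely more functions than programs can be made precise with a standard counting argument, sketched here in conventional notation (this sketch is not from the original text):

```latex
% Programs are finite strings over a finite alphabet, so the set of all
% programs is countable:
|\{\text{programs}\}| = \aleph_0 .

% Even restricting attention to binary functions on the naturals, the set
% of functions is uncountable:
|\{\, f : \mathbb{N} \to \{0,1\} \,\}| = 2^{\aleph_0} > \aleph_0 .

% Diagonalization: given any enumeration f_1, f_2, f_3, \ldots of such
% functions, the function
g(n) = 1 - f_n(n)
% differs from every f_n at input n, so no enumeration (and hence no
% pairing of programs with functions) can be exhaustive.
```

    Since each program computes at most one function, most functions have no program that computes them, which is Hillis's point above.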

    A major discovery of the twentieth century was that a number of seemingly different symbol manipulators were all identical in the sense that they all could compute the same maximal class of input-output pairings (i.e., the computable functions). Because of this discovery, these different proposals are all grouped together into the class “universal machine,” which is sometimes called the “effectively computable procedures.” This class is “a large zoo of different formulations” that includes “Turing machines, recursive functions, Post canonical systems, Markov algorithms, all varieties of general purpose digital computers, [and] most programming languages” (Newell, 1980, p. 150).

    Newell (1980) proved that a generic physical symbol system was also a universal machine. This proof, coupled with the physical symbol system hypothesis, leads to a general assumption in classical cognitive science: cognition is computation, the brain implements a universal machine, and the products of human cognition belong to the class of computable functions.

    The claim that human cognition is produced by a physical symbol system is a scientific hypothesis. Evaluating the validity of this hypothesis requires fleshing out many additional details. What is the organization of the program that defines the physical symbol system for cognition (Newell & Simon, 1972)? In particular, what kinds of symbols and expressions are being manipulated? What primitive operations are responsible for performing symbol manipulation? How are these operations controlled? Classical cognitive science is in the business of fleshing out these details, being guided at all times by the physical symbol system hypothesis.


    This page titled 3.8: Componentiality, Computability, and Cognition is shared under a CC BY-NC-ND license and was authored, remixed, and/or curated by Michael R. W. Dawson (Athabasca University Press) .
