
7.4: Centralized versus Decentralized Control


    Two of the key elements of a classical theory of cognitive science are a set of primitive symbols and a set of primitive processes for symbol manipulation. However, these two necessary components are not by themselves sufficient to completely define a working classical model. A third element is also required: a mechanism of control.

    Control is required to determine “what to do next,” to choose which primitive operation is to be applied at any given moment.

    Beyond the capability to execute the basic operations singly, a computing machine must be able to perform them according to the sequence—or rather, the logical pattern—in which they generate the solution of the mathematical problem that is the actual purpose of the calculation in hand. (von Neumann, 1958, p. 11)

    The purpose of this section is to explore the notion of control from the perspective of the three schools of thought in cognitive science. This is done by considering cognitive control in the context of the history of the automatic control of computing devices. It is argued that while the different approaches in cognitive science may claim to have very different accounts of cognitive control, there are in fact no qualitative differences amongst these accounts.

    One of the earliest examples of automatic control was Jacquard’s punched card mechanism for, in essence, programming a loom to weave a particular pattern into silk fabric (Essinger, 2004), as discussed in Chapter 3. One punched card controlled the appearance of one row of thread in the fabric. Holes punched in the card permitted rods to move, which raised specified threads to make them visible at that point in the fabric. The cards that defined a pattern were linked together as a belt that advanced one card at a time during weaving. A typical pattern was defined by around 2,000 to 4,000 different punched cards; very complex patterns required many more. For instance, Jacquard’s self-portrait in silk was defined by 24,000 different punched cards.

    Jacquard patented his loom in 1804 (Essinger, 2004). By the end of the nineteenth century, punched cards inspired by his invention had a central place in the processing of information. However, their role was to represent this information, not to control how it was manipulated.

    After Herman Hollerith graduated from the Columbia School of Mines in 1879, he was employed to work on the 1880 United States Census, the first census not only to collect population data but also to be concerned with economic issues (Essinger, 2004). Hollerith’s census experience revealed a marked need to automate the processing of the huge amount of information that had been collected.

    While engaged in work upon the tenth census, the writer’s attention was called to the methods employed in the tabulation of population statistics and the enormous expense involved. These methods were at the time described as ‘barbarous[;] some machine ought to be devised for the purpose of facilitating such tabulations’. (Hollerith, 1889, p. 239)

    Hollerith’s response was to represent census information using punched cards (Austrian, 1982; Comrie, 1933; Hollerith, 1889). A standard punched card, called a tabulating card, measured 18.7 cm by 8.3 cm, and its upper left-hand corner was beveled to prevent the card from being incorrectly oriented. A blank tabulating card consisted of 80 vertical columns, with 12 different positions in each column through which a hole could be punched. The card itself acted as an electrical insulator and was passed between a wire brush and a brass roller. The brush and roller came in contact wherever a hole had been punched, completing an electrical circuit and permitting specific information to be read from a card and acted upon (Eckert, 1940).
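    To make the card’s mechanics concrete, here is a minimal sketch in Python of a tabulating card and its brush-and-roller reader. The representation and names are illustrative assumptions, not details drawn from Hollerith’s patents.

```python
# A minimal model of a tabulating card: 80 columns, each with 12 punch
# positions. A card is a list of sets; the set for a column holds the
# row positions that have been punched. (Illustrative sketch; names
# and representation are assumptions, not from the historical sources.)

ROWS = 12
COLUMNS = 80

def blank_card():
    """A card with no holes punched."""
    return [set() for _ in range(COLUMNS)]

def punch(card, column, row):
    """Punch a hole at (column, row)."""
    assert 0 <= column < COLUMNS and 0 <= row < ROWS
    card[column].add(row)

def read_column(card, column):
    """Simulate the brush-and-roller reader: a circuit closes only
    where a hole lets the brush touch the roller, so reading a column
    reports exactly the punched positions."""
    return sorted(card[column])

card = blank_card()
punch(card, 0, 3)
punch(card, 0, 11)
print(read_column(card, 0))  # [3, 11]
```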

    Hollerith invented a set of different devices for manipulating tabulating cards. These included a card punch for entering data by punching holes in cards, a verifier for checking for data entry errors, a counting sorter for sorting cards into different groups according to the information punched in any column of interest, a tabulator or accounting machine for adding numbers punched into a set of cards, and a multiplier for taking two different numbers punched on a card, computing their product, and punching the product onto the same card. Hollerith’s devices were employed during the 1890 census. They saved more than two years of work and $5 million, and permitted complicated tables involving relationships between different variables to be easily created (Essinger, 2004).

    In Hollerith’s system, punched cards represented information, and the various specialized devices that he invented served as the primitive processes available for manipulating information. Control, however, was not mechanized—it was provided by a human operator of the various tabulating machines in a room. “The calculating process was done by passing decks of cards from one machine to the next, with each machine contributing something to the process” (Williams, 1997, p. 253). This approach was very powerful. In what has been described as the first book about computer programming, Punched Card Methods in Scientific Computation (Eckert, 1940), astronomer Wallace Eckert described how a set of Hollerith’s machines—a punched card installation—could be employed for harmonic analysis, for solving differential equations, for computing planetary perturbations, and for performing many other complex calculations.

    The human controller of a punched card installation was in a position analogous to a weaver in Lyon prior to the invention of Jacquard’s loom. That is, both were human operators—or more precisely, human controllers—of machines responsible for producing complicated products. Jacquard revolutionized the silk industry by automating the control of looms. Modern computing devices arose from an analogous innovation, automating the control of Hollerith’s tabulators: the entire room comprising a punched card installation, “including the people in it—and not the individual machines is what the electronic computer eventually replaced” (Ceruzzi, 1997, p. 8).

    The first phase of the history of replacing punched card installations with automatically controlled computing devices involved the creation of calculating devices that employed mechanical, electromechanical, or relay technology (Williams, 1997). This phase began in the 1930s with the creation of the German calculators invented by Konrad Zuse (Zuse, 1993), the Bell relay computers developed by George Stibitz (Irvine, 2001; Stibitz & Loveday, 1967a, 1967b), and the Harvard machines designed by Howard Aiken (Aiken & Hopper, 1946).

    The internal components of any one of these calculators performed operations analogous to those performed by the different Hollerith machines in a punched card installation. In addition, the actions of these internal components were automatically controlled. Completing the parallel with the Jacquard loom, this control was accomplished using punched tape or cards. The various Stibitz and Aiken machines read spools of punched paper tape; Zuse’s machines were controlled by holes punched in discarded 35 mm movie film (Williams, 1997). The calculators developed during this era by IBM, a company that grew in part out of Hollerith’s Computing-Tabulating-Recording Company, were controlled by decks of punched cards (Williams, 1997).

    In the 1940s, electromechanical or relay technology was replaced with much faster electronic components, leading to the next generation of computer devices. Vacuum tubes were key elements of both the Atanasoff-Berry computer (ABC), created by John Atanasoff and Clifford Berry (Burks & Burks, 1988; Mollenhoff, 1988; Smiley, 2010), and the ENIAC (Electronic Numerical Integrator and Computer) engineered by Presper Eckert and John Mauchly (Burks, 2002; Neukom, 2006).

    The increased speed of the internal components of electronic computers created a problem for paper tape and punched card control: the electronic machines were 500 times faster than relay-based devices (Pelaez, 1999), so the traditional forms of control were far too slow to keep up.

    This control problem was solved for Eckert and Mauchly’s ENIAC by using a master controller that itself was an electronic device. It was a set of ten electronic switches that could each be set to six different values; each switch was associated with a counter that could be used to advance a switch to a new setting when a predefined value was reached (Williams, 1997). The switches would route incoming signals to particular components of ENIAC, where computations were performed; a change in a switch’s state would send information to a different component of ENIAC. The control of this information flow was accomplished by using a plug board to physically wire the connections between switches and computer components. This permitted control to match the speed of computation, but at a cost:

    ENIAC was a fast but relatively inflexible machine. It was best suited for use in long and repetitious calculations. Once it was wired up for a particular program, it was in fact a special purpose machine. Adapting it to another purpose (a different problem) required manual intervention to reconfigure the electrical circuits. (Pelaez, 1999, p. 361)

    Typically two full days of rewiring the plug board were required to convert ENIAC from one special purpose machine to another.
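    The counting-and-routing logic of ENIAC’s master controller can be sketched in a few lines of code. The Python model of a single stepper switch below is an illustration of the idea only; the limits, destinations, and names are invented, not taken from the ENIAC engineering documents.

```python
# A sketch of one "stepper" in ENIAC's master programmer: a switch with
# six positions, paired with a counter that advances the switch when a
# preset count is reached. Routing is abstracted as destination names.
# (Illustrative only; limits, destinations, and names are invented.)

class Stepper:
    POSITIONS = 6

    def __init__(self, limits, destinations):
        self.limits = limits              # pulses to count at each position
        self.destinations = destinations  # component receiving pulses there
        self.position = 0
        self.count = 0

    def pulse(self):
        """Route one program pulse; advance the switch at the preset count."""
        target = self.destinations[self.position]
        self.count += 1
        if self.count >= self.limits[self.position]:
            self.count = 0
            self.position = (self.position + 1) % self.POSITIONS
        return target

stepper = Stepper(limits=[3, 1, 2, 1, 1, 1],
                  destinations=["accumulator-1", "accumulator-2",
                                "multiplier", "divider",
                                "function-table", "printer"])
print([stepper.pulse() for _ in range(5)])
# ['accumulator-1', 'accumulator-1', 'accumulator-1',
#  'accumulator-2', 'multiplier']
```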

    Thus the development of electronic computers led to a crisis of control. Punched tape provided flexible, easily changed control, but punched tape readers were too slow to take practical advantage of the speed of the new machines. Plug boards provided control that matched the speed of the new componentry, but this control was inflexible and time-consuming to change. The crisis of control inspired another innovation: the stored program computer (Aspray, 1982; Ceruzzi, 1997; Pelaez, 1999).

    The notion of the stored program computer was first laid out in 1945 by John von Neumann in a draft memo that described the properties of the EDVAC (Electronic Discrete Variable Automatic Computer), the computer that directly descended from the ENIAC (Godfrey & Hendry, 1993; von Neumann, 1993). One of the innovations of this design was the inclusion of a central controller. In essence, the instructions that ordinarily would be represented as a sequence on a punched tape would instead be represented internally in EDVAC’s memory. The central controller had the task of fetching, interpreting, and executing an instruction from memory and then repeating this process after proceeding to the next instruction in the sequence.
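    The fetch-interpret-execute cycle that defines the central controller can be captured in a short sketch. The toy instruction set below is invented for illustration; the point is simply that instructions and data live in one memory, and a central controller steps through them.

```python
# A minimal sketch of the stored program idea: instructions live in the
# same memory as data, and a central controller repeatedly fetches,
# decodes, and executes them. The four-instruction "machine language"
# is invented for illustration.

def run(memory):
    pc = 0   # program counter: the controller's place in the sequence
    acc = 0  # a single accumulator register
    while True:
        op, arg = memory[pc]      # fetch
        if op == "LOAD":          # decode and execute...
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory
        pc += 1                   # ...then proceed to the next instruction

# One memory holds both program (cells 0-3) and data (cells 4-6).
memory = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6),
          3: ("HALT", None), 4: 2, 5: 3, 6: 0}
print(run(memory)[6])  # 2 + 3 = 5
```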

    There is no clear agreement about which particular device was the first stored program computer; several candidate machines were created in the same era. These include the EDVAC (created 1945–1950) (Reitwiesner, 1997; von Neumann, 1993; Williams, 1993), Princeton’s IAS computer (created 1946–1951) (Burks, 2002; Cohen, 1999), and the Manchester machine (running in 1948) (Copeland, 2011; Lavington, 1980). Later work on the ENIAC also explored its use of stored programs (Neukom, 2006). Regardless of “firsts,” all of these machines were functionally equivalent in the sense that they replaced external control, such as control by a punched tape, with instructions internalized in memory.

    The invention of the stored program computer led directly to computer science’s version of the classical sandwich (Hurley, 2001). “Sensing” involves loading the computer’s internal memory with both the program and the data to be processed. “Thinking” involves executing the program and performing the desired calculations upon the stored data. “Acting” involves providing the results of the calculations to the computer’s operator, for instance by punching an output tape or a set of punched cards.

    The classical sandwich is one of the defining characteristics of classical cognitive science (Hurley, 2001), and the proposal of a sense-act cycle to replace the sandwich’s sense-think-act processing (Brooks, 1999, 2002; Clark, 1997, 2008; Pfeifer & Scheier, 1999) is one of the characteristic reactions of embodied cognitive science against the classical tradition (Shapiro, 2011). Classical cognitive science’s adoption of the classical sandwich was a natural consequence of being inspired by computer science’s approach to information processing, which, at the time that classical cognitive science was born, had culminated in the invention of the stored program computer.

    However, we have seen from the history leading up to its invention that the stored program computer—and hence the classical sandwich—was not an in-principle requirement for information processing. It was instead the result of a practical need to match the speed of control with the speed of electronic components. Indeed, the control mechanisms of a variety of information processing models that are central to classical cognitive science are quite consistent with embodied cognitive science.

    For example, the universal Turing machine is critically important to classical cognitive science, not only in its role of defining the core elements of symbol manipulation, but also in its function of defining the limits of computation (Dawson, 1998). However, in most respects a universal Turing machine is a device that highlights some of the key characteristics of the embodied approach.

    For instance, the universal Turing machine is certainly not a stored program computer (Wells, 2002). If one were to actually build such a device—the original was only used as a theoretical model (Turing, 1936)—then the only internal memory that would be required would be for holding the machine table and the machine head’s internal state. (That is, if any internal memory was required at all. Turing’s notion of machine state was inspired by the different states of a typewriter’s keys [Hodges, 1983], and thus a machine state may not be remembered or represented, but rather merely adopted. Similarly, the machine table would presumably be built from physical circuitry, and again would be neither represented nor remembered.) The program executed by a universal Turing machine, and the data manipulations that result, are completely scaffolded. The machine’s memory is literally an external notebook, analogous to the one used by Otto in the famous argument for extending the mind (Clark & Chalmers, 1998). That is, the data and program for a universal Turing machine are both stored externally, on the machine’s ticker tape.

    Indeed, the interactions between a universal Turing machine’s machine head and its ticker tape are decidedly of the sense-act, and not of the sense-think-act, variety. Every possible operation in the machine table performs an action (either writing something on the ticker tape or moving the tape one cell to the right or to the left) immediately after sensing the current symbol on the tape and the current state of the machine head. No other internal, intermediary processing (i.e., thinking) is required.
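    A short sketch may make this concrete. In the Python model below, each cycle of the loop senses the machine state and the current tape symbol, looks up the single matching instruction, and immediately acts; there is no intermediate processing. The example machine, which appends a 1 to a unary numeral, is invented for illustration.

```python
# A minimal Turing machine step loop. Each cycle is pure sense-act:
# the machine senses its state and the symbol under the head, looks up
# the single matching instruction in the machine table, and immediately
# writes and moves. (The example machine, which appends a 1 to a unary
# numeral, is invented for illustration.)

from collections import defaultdict

def run_turing(table, tape, state="start"):
    tape = defaultdict(lambda: "0", enumerate(tape))  # the external tape
    head = 0
    while state != "halt":
        symbol = tape[head]                        # sense
        write, move, state = table[(state, symbol)]
        tape[head] = write                         # act: write...
        head += {"R": 1, "L": -1}[move]            # ...and move
    return "".join(tape[i] for i in sorted(tape))

# Machine table: (state, symbol) -> (write, move, next state).
# Scan right over the 1s, replace the first 0 with a 1, then halt.
table = {("start", "1"): ("1", "R", "start"),
         ("start", "0"): ("1", "R", "halt")}

print(run_turing(table, "1110"))  # 1111
```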

    Similarly, external scaffolding was characteristic of later-generation relay computers developed at Bell Labs, such as the Mark III. These machines employed more than one tape reader, permitting external tapes to be used to store tables of precomputed values. The result was the CADET architecture (“Can’t Add, Doesn’t Even Try”), which worked by looking up answers to addition and other problems instead of computing the results (Williams, 1997). This was possible because of a “hunting circuit” that permitted the computer to move to any desired location on a punched tape (Stibitz & Loveday, 1967b). ENIAC employed scaffolding as well, obtaining standard function values by reading them from cards (Williams, 1997).
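    The CADET idea is easy to illustrate: rather than computing a sum, the machine consults a table prepared in advance, just as the Bell machines consulted tables punched on external tape. A toy sketch, with an invented table:

```python
# A toy version of the CADET idea: instead of computing sums, look them
# up in a table prepared in advance, as the Bell machines looked up
# precomputed values on external tape. (Illustrative sketch only.)

# "Punch" a table of all one-digit sums ahead of time...
sum_table = {(a, b): a + b for a in range(10) for b in range(10)}

def cadet_add(a, b):
    """Can't add, doesn't even try: pure table lookup."""
    return sum_table[(a, b)]

print(cadet_add(3, 4))  # 7
```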

    From an engineering perspective, the difference between externally controlled and stored program computers was quantitative (e.g. speed of processing) and not qualitative (e.g. type of processing). In other words, to a computer engineer there may be no principled difference between a sense-act device such as a universal Turing machine and a sense-think-act computer such as the EDVAC. In the context of cognitive control, then, there may be no qualitative element that distinguishes the classical and embodied approaches.

    Perhaps a different perspective on control may reveal sharp distinctions between classical and embodied cognitive science. For instance, a key element in the 1945 description of the EDVAC was the component called the central control unit (Godfrey & Hendry, 1993; von Neumann, 1993). It was argued by von Neumann that the most efficient way to control a stored program computer was to have a physical component of the device devoted to control (i.e., to the fetching, decoding, and executing of program steps). Von Neumann called this the “central control organ.” Perhaps it is the notion that control is centralized to a particular location or organ of a classical device that serves as the division between classical and embodied models. For instance, behaviour-based roboticists often strive to decentralize control (Brooks, 1999). In Brooks’ early six-legged walking robots like Attila, each leg of the robot was responsible for its own control, and no central control organ was included in the design (Brooks, 2002).

    However, it appears that the need for a central control organ was tied again to pragmatic engineering rather than to a principled requirement for defining information processing. The adoption of a central controller reflected adherence to engineering’s principle of modular design (Marr, 1976). According to this principle, “any large computation should be split up and implemented as a collection of small sub-parts that are as nearly independent of one another as the overall task allows” (p. 485). Failure to devise a functional component or process according to the principle of modular design typically means,

    that the process as a whole becomes extremely difficult to debug or to improve, whether by a human designer or in the course of natural evolution, because a small change to improve one part has to be accompanied by many simultaneous compensating changes elsewhere. (Marr, 1976, p. 485)

    Digital computers were explicitly designed according to the principle of modular design, which von Neumann (1958) called “the principle of only one organ for each basic operation” (p. 13). Not only was this good engineering practice, but von Neumann also argued that this principle distinguished digital computers from their analog ancestors such as the differential analyzer (Bush, 1931).

    The principle of modular design is also reflected in the architecture of the universal Turing machine. The central control organ of this device is its machine table (see Figure 3-8), which is separate and independent from the other elements of the device, such as the mechanisms for reading and writing the tape, the machine state, and so on. Recall that the machine table is a set of instructions; each instruction is associated with a specific input symbol and a particular machine state. When a Turing machine in physical state x reads symbol y from the tape, it proceeds to execute the instruction at coordinates (x, y) in its machine table.

    Importantly, when von Neumann’s (1958) principle of only one organ for each basic operation is taken to the extreme, the result is a Turing machine whose control is completely decentralized. Rather than taking the entire machine table as a central control organ, one could plausibly design an uber-modular system in which each instruction was associated with its own organ. For example, one could replace the machine table with a production system in which each production was responsible for one of the machine table’s entries. The conditions for each production would be a particular machine state and a particular input symbol, and the production’s action would be the required manipulation of the ticker tape. In this case, the production system version of the Turing machine would behave identically to the original version. However, it would no longer have a centralized control organ.
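    To illustrate, the table-driven machine sketched earlier can be rewritten as a collection of independent productions, one per table entry. Each production knows only its own condition and action; no single component holds the control knowledge, yet the behaviour is identical. The code is again an illustrative sketch.

```python
# The toy unary-increment machine again, but with the machine table
# replaced by independent productions, one organ per table entry.
# (Illustrative sketch only.)

from collections import defaultdict

def make_production(state, symbol, write, move, next_state):
    """One production = one organ: it knows only its own table entry."""
    def production(current_state, current_symbol):
        if (current_state, current_symbol) == (state, symbol):
            return (write, move, next_state)  # fire
        return None                           # not this production's job
    return production

productions = [make_production("start", "1", "1", "R", "start"),
               make_production("start", "0", "1", "R", "halt")]

def run_productions(productions, tape, state="start"):
    tape = defaultdict(lambda: "0", enumerate(tape))
    head = 0
    while state != "halt":
        symbol = tape[head]
        # Every production inspects the situation; exactly one fires.
        for production in productions:
            action = production(state, symbol)
            if action is not None:
                break
        write, move, state = action
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape))

print(run_productions(productions, "1110"))  # 1111, as before
```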

    In short, central control is not a necessary characteristic of classical information processing, and therefore does not distinguish between classical and embodied theories. Another way of making this point is to remember the Chapter 3 observation that production systems are prototypical examples of classical architectures (Anderson et al., 2004; Newell, 1973), but they, like many embodied models (Dawson, Dupuis, & Wilson, 2010; Holland & Melhuish, 1999; Susi & Ziemke, 2001; Theraulaz & Bonabeau, 1999), are controlled stigmergically. “Traditional production system control is internally stigmergic, because the contents of working memory determine which production will act at any given time” (Dawson, Dupuis, & Wilson, 2010, p. 76).

    The discussion to this point has used the history of the automatic control of computers to argue that characteristics of control cannot be used to provide a principled distinction between classical and embodied cognitive science. Let us now examine connectionist cognitive science in the context of cognitive control.

    Connectionists have argued that the nature of cognitive control provides a principled distinction between network models and models that belong to the classical tradition (Rumelhart & McClelland, 1986b). In particular, connectionist cognitive scientists claim that control in their networks is completely decentralized, and that this property is advantageous because it is biologically plausible. “There is one final aspect of our models which is vaguely derived from our understanding of brain functioning. This is the notion that there is no central executive overseeing the general flow of processing” (Rumelhart & McClelland, 1986b, p. 134).

    However, the claim that connectionist networks are not under central control is easily refuted; Dawson and Schopflocher (1992a) considered a very simple connectionist system, the distributed memory or standard pattern associator described in Chapter 4 (see Figure 4.2.1). They noted that connectionist researchers typically describe such models as being autonomous, suggesting that the key operations of such a memory (namely learning and recall) are explicitly defined in its architecture, that is, in the connection weights and processors, as depicted in Figure 4.2.1.

    However, Dawson and Schopflocher (1992a) proceeded to show that even in such a simple memory system, whether the network learns or recalls information depends upon instructions provided by an external controller: the programmer demonstrating the behaviour of the network. When instructed to learn, the components of the standard pattern associator behave one way. However, when instructed to recall, these same components behave in a very different fashion. The nature of the network’s processing depends critically upon signals provided by a controller that is not part of the network architecture.

    For example, during learning the output units in a standard pattern associator serve as a second bank of input units, but during recall they record the network’s response to signals sent from the other input units. How the output units behave is determined by whether the network is involved in either a learning phase or a recall phase, which is signaled by the network’s user, not by any of its architectural components. Similarly, during the learning phase connection weights are modified according to a learning rule, but the weights are not modified during the recall phase. How the weights behave is under the user’s control. Indeed, the learning rule is defined outside the architecture of the network that is visible in Figure 4.2.1.
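    Dawson and Schopflocher’s point can be illustrated with a toy Hebbian pattern associator. In the sketch below (the code and its names are illustrative, not their notation), the decisive signal, whether to learn or to recall, arrives as an argument supplied by the user; it appears nowhere in the network’s own architecture of weights and units.

```python
# A toy Hebbian pattern associator. The network architecture is just
# the weight matrix; the learn/recall decision is an external signal.
# (Illustrative sketch; not Dawson and Schopflocher's notation.)

import numpy as np

class PatternAssociator:
    def __init__(self, n_input, n_output):
        self.weights = np.zeros((n_output, n_input))  # the whole architecture

    def step(self, input_pattern, mode, target=None, rate=1.0):
        if mode == "learn":
            # Output units act as a second bank of inputs: the target is
            # clamped onto them, and Hebbian learning adjusts the weights.
            self.weights += rate * np.outer(target, input_pattern)
            return target
        elif mode == "recall":
            # The same output units now record the network's response.
            return self.weights @ input_pattern

net = PatternAssociator(4, 4)
x = np.array([0.5, -0.5, 0.5, -0.5])   # cue (unit length)
y = np.array([0.5, 0.5, -0.5, -0.5])   # pattern to associate with x
net.step(x, mode="learn", target=y)    # "learn" comes from the user...
print(net.step(x, mode="recall"))      # ...and so does "recall": recovers y
```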

    Dawson and Schopflocher (1992a) concluded that,

    current PDP networks are not autonomous because their learning principles are not in fact directly realized in the network architecture. That is, networks governed by these principles require explicit signals from some external controller to determine when they will learn or when they will perform a learned task. (Dawson & Schopflocher, 1992a, pp. 200–201)

    This is not a principled limitation, for Dawson and Schopflocher presented a much more elaborate architecture that permits a standard pattern associator to learn and recall autonomously, that is, without the need for a user’s intervention. However, this architecture is not typical; standard pattern associators like the one in Figure 4.2.1 demand executive control.

    The need for such control is not limited to simple distributed memories. The same is true for a variety of popular and more powerful multilayered network architectures, including multilayered perceptrons and self-organizing networks (Roy, 2008): “There is clearly a central executive that oversees the operation of the back-propagation algorithm” (p. 1436). Roy (2008) proceeded to argue that such control is itself required by brain-like systems, and therefore biologically plausible networks demand not only an explicit account of data transformation, but also a biological theory of executive control.

    In summary, connectionist networks generally require the same kind of control that is a typical component of a classical model. Furthermore, it was argued earlier that there does not appear to be any principled distinction between this kind of control and the type that is presumed in an embodied account of cognition. Control is a key characteristic of a cognitive theory, and different schools of thought in cognitive science are united in appealing to the same type of control mechanisms. In short, central control is not a mark of the classical.


    This page titled 7.4: Centralized versus Decentralized Control is shared under a CC BY-NC-ND license and was authored, remixed, and/or curated by Michael R. W. Dawson (Athabasca University Press).
