
7.5: Serial versus Parallel Processing


    Classical cognitive science was inspired by the characteristics of digital computers; few would deny that the classical approach exploits the digital computer metaphor (Pylyshyn, 1979a). Computers are existence proofs that physical machines are capable of manipulating, with infinite flexibility, semantically interpretable expressions (Haugeland, 1985; Newell, 1980; Newell & Simon, 1976). Computers illustrate how logicism can be grounded in physical mechanisms.

    The connectionist and the embodied reactions to classical cognitive science typically hold that the digital computer metaphor is not appropriate for theories of cognition. It has been argued that the operations of traditional electronic computers are qualitatively different from those of human cognition, and as a result the classical models they inspire are doomed to fail, as are attempts to produce artificial intelligence in such machines (Churchland & Sejnowski, 1992; Dreyfus, 1972, 1992; Searle, 1980).

    In concert with rejecting the digital computer metaphor, connectionist and embodied cognitive scientists turn to qualitatively different notions in an attempt to distinguish their approaches from the classical theories that preceded them. However, their attempt to define the mark of the classical, and to show how this mark does not apply to their theories, is not always successful.

    For example, it was argued in the previous section that when scholars abandoned the notion of centralized control, they were in fact reacting against a concept that was not a necessary condition of classical theory, but was instead an engineering convenience. Furthermore, mechanisms of control in connectionist and embodied theories were shown not to be radically different from those of classical models. The current section provides another such example.

    One of the defining characteristics of classical theory is serial processing, the notion that only one operation can be executed at a time. Opponents of classical cognitive science have argued that this means classical models are simply too slow to be executed by the sluggish hardware that makes up the brain (Feldman & Ballard, 1982). They suggest that what is instead required is parallel processing, in which many operations are carried out simultaneously. Below it is argued that characterizing digital computers or classical theories as being serial in nature is not completely accurate. Furthermore, characterizing alternative schools of thought in cognitive science as champions of parallel processing is also problematic. In short, the difference between serial and parallel processing may not provide a clear distinction between different approaches to cognitive science.

    It cannot be denied that serial processing has played an important role in the history of modern computing devices. Turing’s (1936) original account of computation was purely serial: a Turing machine processed only a single symbol at a time, and did so by only executing a single operation at a time. However, the purpose of Turing’s proposal was to provide an uncontroversial notion of “definite method”; serial processing made Turing’s notion of computation easy to understand, but was not a necessary characteristic.
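
    To make this seriality concrete, consider a minimal sketch (in Python) of a Turing machine's control loop; the machine and its transition table are hypothetical illustrations, not Turing's original formulation. At every step the machine reads exactly one symbol and executes exactly one instruction.

```python
# Minimal sketch of a serial Turing machine control loop. The
# transition table maps (state, symbol) -> (new state, write, move).

def run_turing_machine(table, tape, state="start", head=0):
    """Execute one instruction at a time until the halt state."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    while state != "halt":
        symbol = tape.get(head, "_")      # read exactly one symbol...
        state, write, move = table[(state, symbol)]
        tape[head] = write                # ...then perform one operation
        head += 1 if move == "R" else -1  # and move the head one cell
    return tape

# Hypothetical example: flip every bit until a blank is read.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flipper, "1011"))  # {0: '0', 1: '1', 2: '0', 3: '0', 4: '_'}
```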

    A decade later, the pioneering stored program computer EDVAC was also a serial device in two different ways (Ceruzzi, 1997; von Neumann, 1993). First, it only executed one command at a time. Second, even though it used 44 bits to represent a number as a “word,” it processed these words serially, operating on them one bit at a time. Again, though, this design was motivated by a desire for simplicity—in this case, simplicity of engineering. “The device should be as simple as possible, that is, contain as few elements as possible. This can be achieved by never performing two operations simultaneously, if this would cause a significant increase in the number of elements required” (von Neumann, 1993, p. 8).
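
    EDVAC's bit-at-a-time arithmetic is easy to illustrate. The sketch below adds two binary words serially, one bit per step, with a carry rippling along; the 4-bit word length is illustrative, not EDVAC's 44 bits.

```python
# Sketch of bit-serial addition: two words are combined one bit per
# step, least significant bit first, with a carry propagated between
# steps. The 4-bit word length is illustrative, not EDVAC's 44 bits.

def bit_serial_add(a_bits, b_bits):
    """Add two equal-length bit lists (LSB first), one bit per step."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):   # one "clock tick" per bit
        total = a + b + carry
        result.append(total % 2)
        carry = total // 2
    return result + [carry]            # final carry-out bit

# 6 + 3 = 9, with words written least significant bit first.
print(bit_serial_add([0, 1, 1, 0], [1, 1, 0, 0]))  # [1, 0, 0, 1, 0] = 9
```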

    Furthermore, the serial nature of EDVAC was also dictated by engineering constraints on the early stored program machines. The existence of such devices depended upon the invention of new kinds of memory components (Williams, 1997). EDVAC used a delay line memory system, which worked by delaying a series of pulses (which represented a binary number) for a few milliseconds, and then by feeding these pulses back into the delay line so that they persisted in memory. Crucially, delay line memories only permitted stored information to be accessed in serial, one bit at a time.

    EDVAC’s simple, serial design reflected an explicit decision against the kind of parallel processing that von Neumann (1993) called the telescoping of processes.

    It is also worth emphasizing that up to now all thinking about high speed digital computing devices has tended in the opposite direction: Towards acceleration by telescoping processes at the price of multiplying the number of elements required. It would therefore seem to be more instructive to try to think out as completely as possible the opposite viewpoint. (von Neumann, 1993, p. 8)

    EDVAC’s opposite viewpoint was only practical because of the high speed of its vacuum tube components.

    Serial processing was an attractive design decision because it simplified the architecture of EDVAC. However, it was not a necessary design decision. The telescoping of processes was a common design decision in older computing devices that used slower components. Von Neumann was well aware that many of EDVAC’s ancestors employed various degrees of parallel processing.

    In all existing devices where the element is not a vacuum tube the reaction time of the element is sufficiently long to make a certain telescoping of the steps involved in addition, subtraction, and still more in multiplication and division, desirable. (von Neumann, 1993, p. 6)

    For example, the Zuse computers performed arithmetic operations in parallel, with one component manipulating the exponent and another manipulating the mantissa of a represented number (Zuse, 1993). Aiken’s Mark II computer at Harvard also had multiple arithmetic units that could be activated in parallel, though this was not common practice, because its parallel operations were difficult to coordinate (Williams, 1997). ENIAC used 20 accumulators as mathematical operators, and these could be run simultaneously; it was a parallel machine (Neukom, 2006).

    In spite of von Neumann’s (1993) championing of serial processing, advances in computer memory permitted him to adopt a partially parallel architecture in the machine he later developed at Princeton (Burks, Goldstine, & von Neumann, 1989). Cathode ray tube memories (Williams & Kilburn, 1949) allowed all of the bits of a word in memory to be accessed in parallel, though operations on this retrieved information were still conducted in serial.

    To get a word from the memory in this scheme requires, then, one switching mechanism to which all 40 tubes are connected in parallel. Such a switching scheme seems to us to be simpler than the technique needed in the serial system and is, of course, 40 times faster. We accordingly adopt the parallel procedure and thus are led to consider a so-called parallel machine, as contrasted with the serial principles being considered for the EDVAC. (Burks, Goldstine & von Neumann, 1989, p. 44)

    Interestingly, the extreme serial design in EDVAC resurfaced in the pocket calculators of the 1970s, permitting them to be simple and small (Ceruzzi, 1997).

    The brief historical review provided above indicates that while some of the early computing devices were serial processors, many others relied upon a certain degree of parallel processing. The same is true of some prototypical architectures proposed by classical cognitive science. For example, production systems (Newell, 1973, 1990; Newell & Simon, 1972) are serial in the sense that only one production manipulates working memory at a time. However, all of the productions in such a system scan the working memory in parallel when determining whether the condition that launches their action is present.
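
    This mixture is easy to see in a sketch. The toy production system below (a Python illustration, not any particular published model) matches every production against working memory on each cycle, but then lets only one matched production act.

```python
# Toy production system. The match phase tests every production's
# condition against working memory (conceptually in parallel); the act
# phase lets exactly one matched production modify memory (serial).

def run_production_system(productions, working_memory, max_cycles=100):
    for _ in range(max_cycles):
        matched = [p for p in productions if p["condition"](working_memory)]
        if not matched:
            break                       # quiescence: nothing left to do
        matched[0]["action"](working_memory)   # one production fires
    return working_memory

# Two illustrative productions that count down and then flag completion.
productions = [
    {"condition": lambda wm: wm["count"] > 0,
     "action":    lambda wm: wm.update(count=wm["count"] - 1)},
    {"condition": lambda wm: wm["count"] == 0 and not wm["done"],
     "action":    lambda wm: wm.update(done=True)},
]
print(run_production_system(productions, {"count": 3, "done": False}))
# {'count': 0, 'done': True}
```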

    An alternative approach to making the case that serial processing is not a mark of the classical is to note that serial processing also appears in non-classical architectures. The serial versus parallel distinction is typically argued to be one of the key differences between connectionist and classical theories. For instance, parallel processing is required to explain how the brain is capable of performing complex calculations in spite of the slowness of neurons in comparison to electronic components (Feldman & Ballard, 1982; McClelland, Rumelhart, & Hinton, 1986; von Neumann, 1958). In comparing brains to digital computers, von Neumann (1958, p. 50) noted that “the natural componentry favors automata with more, but slower, organs, while the artificial one favors the reverse arrangement of fewer, but faster organs.”

    It is certainly the case that connectionist architectures have a high degree of parallelism. For instance, all of the processing units in the same layer of a multilayered perceptron are presumed to operate simultaneously. Nevertheless, even prototypical parallel distributed processing models reveal the presence of serial processing.

    One reason that the distributed memory or the standard pattern associator requires external, central control (Dawson & Schopflocher, 1992a) is that this kind of model is not capable of simultaneous learning and recalling. This is because one of its banks of processors is used as a set of input units during learning, but is used completely differently, as output units, during recall. External control is used to determine how these units are employed, and therefore determines whether the machine is learning or recalling. External control also imposes seriality in the sense that during learning, input patterns are presented in sequence, and during recall, cues are again presented one at a time. Dawson and Schopflocher (1992a) demonstrated how true parallel processing could be accomplished in such a network, but only after substantially elaborating the primitive components of the connectionist architecture.
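
    A minimal Hebbian pattern associator makes the point; the sketch below is a generic illustration of the standard model, not Dawson and Schopflocher's elaborated network. Notice that the decision to learn or to recall lives outside the network, and that patterns are necessarily presented one at a time.

```python
import numpy as np

# Sketch of a standard Hebbian pattern associator. The external choice
# between learn() and recall() is the "central control" in question:
# the same bank of units serves one role during learning and a
# different role during recall, and patterns arrive one at a time.

class PatternAssociator:
    def __init__(self, n_inputs, n_outputs):
        self.weights = np.zeros((n_outputs, n_inputs))

    def learn(self, cue, target):
        self.weights += np.outer(target, cue)   # Hebbian update, one pair

    def recall(self, cue):
        return self.weights @ cue               # units now act as outputs

net = PatternAssociator(4, 4)
pairs = [(np.array([1, 0, 0, 0]), np.array([0, 1, 0, 1])),
         (np.array([0, 1, 0, 0]), np.array([1, 0, 1, 0]))]
for cue, target in pairs:                       # training pairs in sequence
    net.learn(cue, target)
print(net.recall(np.array([1, 0, 0, 0])))       # [0. 1. 0. 1.]
```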

    A degree of serial processing is also present in multilayered networks. First, while all processors in one layer can be described as operating in parallel, the flow of information from one layer to the next is serial. Second, the operations of an individual processor are intrinsically serial. A signal cannot be output until internal activation has been computed, and internal activation cannot be computed until the net input has been determined.
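
    A vectorized forward pass shows both levels at once; the weights below are arbitrary stand-ins, not a trained network. Each matrix product updates an entire layer of units simultaneously, yet the layers must be visited one after another, and net input must be computed before any activation can be output.

```python
import numpy as np

# Generic multilayer forward pass. Each matrix product computes all
# units in a layer at once (parallel), but layers are visited in order
# (serial), and net input must precede activation (serial within a unit).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(weights_per_layer, inputs):
    activity = inputs
    for W in weights_per_layer:        # serial: one layer after another
        net_input = W @ activity       # parallel: the whole layer at once
        activity = sigmoid(net_input)  # serial: net input, then output
    return activity

rng = np.random.default_rng(0)
layers = [rng.standard_normal((3, 2)),   # 2 inputs -> 3 hidden units
          rng.standard_normal((1, 3))]   # 3 hidden -> 1 output unit
print(forward(layers, np.array([0.5, -0.5])))
```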

    Parallel processing is not generally proposed as a characteristic that distinguishes embodied from classical models. However, some researchers have noted the advantages of decentralized computation in behaviour-based robots (Brooks, 1999).

    Again, though, embodied theories seem to exploit a mixture of parallel and serial processing. Consider the early insect-like walking robots of Rodney Brooks (1989, 1999, 2002). Each leg in the six-legged robot Genghis is a parallel processor, in the sense that it operates autonomously. However, the operations of each leg can be described as a finite state automaton (see the appendix on Genghis in Brooks, 2002), which is an intrinsically serial device.
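
    The serial core of such a controller can be sketched as a simple state machine; the states and transitions below are an illustrative caricature of a walking leg, not Brooks' actual augmented finite state machines.

```python
# Illustrative finite state machine for a single robot leg. The point
# is architectural: the device occupies exactly one state at a time and
# makes exactly one transition per time step.

LEG_TRANSITIONS = {
    "down":  "lift",    # leg is planted; next, lift it
    "lift":  "swing",   # leg is raised; next, swing it forward
    "swing": "plant",   # leg has moved forward; next, lower it
    "plant": "down",    # leg touches ground; the cycle repeats
}

def step_leg(state, n_steps):
    trace = [state]
    for _ in range(n_steps):            # one transition per step
        state = LEG_TRANSITIONS[state]
        trace.append(state)
    return trace

print(step_leg("down", 5))
# ['down', 'lift', 'swing', 'plant', 'down', 'lift']
```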

    The stigmergic control of the swarm intelligence that emerges from a collection of robots or social insects (Beni, 2005; Bonabeau & Meyer, 2001; Hinchey, Sterritt, & Rouff, 2007; Sharkey, 2006; Tarasewich & McMullen, 2002) also appears to be a mixture of parallel and serial operations. A collective operates in parallel in the sense that each member of the collective is an autonomous agent. However, the behaviour of each agent is often best characterized in serial: first the agent does one thing, and then it does another, and so on. For instance, in a swarm capable of creating a nest by blind bulldozing (Parker, Zhang, & Kube, 2003), agents operate in parallel. However, each agent moves in serial from one state (e.g., plowing, colliding, finishing) to another.
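
    A toy simulation captures the two levels; the states are loosely modelled on the plowing, colliding, and finishing states that Parker, Zhang, and Kube describe, but the transition probabilities are invented for illustration. Every agent is updated on every tick, yet each agent advances through its own states strictly one at a time.

```python
import random

# Toy swarm sketch. The collective is parallel: all agents are updated
# on every tick. Each agent is serial: it occupies one state and makes
# at most one transition per tick. Transition probabilities are made up.

def update_agent(state):
    if state == "plowing":
        return "colliding" if random.random() < 0.3 else "plowing"
    if state == "colliding":
        return "finishing" if random.random() < 0.5 else "plowing"
    return "finishing"                  # terminal state

random.seed(1)
agents = ["plowing"] * 5                # five autonomous agents
for tick in range(10):
    agents = [update_agent(a) for a in agents]   # all agents, each tick
print(agents)
```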

    In summary, serial processing has been stressed more in classical models, while parallel processing has received more emphasis in connectionist and embodied approaches. However, serial processing cannot be said to be a mark of the classical.

    First, serial processing in classical information processing systems was adopted as an engineering convenience, and many digital computers included a certain degree of parallel processing. Second, with careful examination, serial processing can also be found mixed in with the parallel processing of connectionist networks or of collective intelligences.

