
4.19: What Is Connectionist Cognitive Science?

    The purpose of the current chapter was to introduce the elements of connectionist cognitive science, the “flavour” of cognitive science that was seen first as Old Connectionism in the 1940s (McCulloch & Pitts, 1943) and which peaked by the late 1950s (Rosenblatt, 1958, 1962; Widrow, 1962; Widrow & Hoff, 1960). Criticisms concerning the limitations of such networks (Minsky & Papert, 1969) caused connectionist research to almost completely disappear until the mid-1980s (Papert, 1988), when New Connectionism arose in the form of techniques capable of training powerful multilayered networks (McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986c).

    Connectionism is now well established as part of mainstream cognitive science, although its relationship to classical cognitive science is far from clear. Artificial neural networks have been used to model a dizzying variety of phenomena including animal learning (Enquist & Ghirlanda, 2005; Schmajuk, 1997), cognitive development (Elman et al., 1996), expert systems (Gallant, 1993), language (Mammone, 1993; Sharkey, 1992), pattern recognition and perception (Pao, 1989; Ripley, 1996; Wechsler, 1992), and musical cognition (Griffith & Todd, 1999; Todd & Loy, 1991).

    Given the breadth of connectionist cognitive science, only a selection of its elements have been introduced in this chapter; capturing all of the important contributions of connectionism in a single chapter is not possible. A proper treatment of connectionism requires a great deal of further reading; fortunately connectionism is described in a rich and growing literature (Amit, 1989; Anderson, 1995; Anderson & Rosenfeld, 1998; Bechtel & Abrahamsen, 2002; Carpenter & Grossberg, 1992; Caudill & Butler, 1992a, 1992b; Churchland, 1986; Churchland & Sejnowski, 1992; Clark, 1989, 1993; Dawson, 2004, 2005; Grossberg, 1988; Horgan & Tienson, 1996; Quinlan, 1991; Ramsey, Stich, & Rumelhart, 1991; Ripley, 1996; Rojas, 1996).

    Connectionist cognitive science is frequently described as a reaction against the foundational assumptions of classical cognitive science. The roots of classical cognitive science draw inspiration from the rationalist philosophy of Descartes, with an emphasis on nativism and logicism (Chomsky, 1966; Devlin, 1996). In contrast, the foundations of connectionist cognitive science are the empiricist philosophy of Locke and the associationist psychology that can be traced from the early British empiricists to the more modern American behaviourists. Connectionist networks acquire structure or knowledge via experience; they often begin as blank slates (Pinker, 2002) and acquire structure as they learn about their environments (Bechtel, 1985; Clark, 1989, 1993; Hillis, 1988).

    Classical cognitive science departed from Cartesian philosophy by seeking materialist accounts of mentality. This view was inspired by the digital computer and the fact that electronic switches could be assigned abstract logical interpretations (Shannon, 1938).

    Connectionism is materialist as well, but arguably in a more restricted sense than classical cognitive science. The classical approach appeals to the multiple realization argument when it notes that under the proper interpretation, almost any physical substrate could instantiate information processing or symbol manipulation (Hillis, 1998). In contrast, connectionism views the digital computer metaphor as mistaken. Connectionists claim that the operations of such a device—regardless of its material nature—are too slow, brittle, and inflexible to be appropriate for modelling cognition. Connectionism posits instead that the brain is the only appropriate material for realizing the mind, and connectionist researchers attempt to frame their theories in terms of information processing that is biologically plausible or neuronally inspired (Amit, 1989; Burnod, 1990; Gluck & Myers, 2001).

    In adopting the digital computer metaphor and the accompanying logicist view that cognition is the result of rule-governed symbol manipulation, classical cognitive science is characterized by a marked structure/process distinction. That is, classical models—typified by Turing machines (Turing, 1936) or production systems (Newell & Simon, 1972)—distinguish between the symbols being manipulated and the explicit rules doing the manipulating. This distinction is usually marked in models by having separate locations for structure and process, such as a memory that holds symbols and a central controller that holds the processes.

    In abandoning the digital computer metaphor and adopting a notion of information processing that is biologically inspired, connectionist cognitive science abandons or blurs the structure/process distinction. Neural networks can be viewed as both structure and process; they have been called active data structures (Hillis, 1985). This has led to an extensive debate about whether theories of cognition require explicit rules (Ramsey, Stich, & Rumelhart, 1991).

    The digital computer metaphor adopted by classical cognitive science leads it to also adopt a particular notion of control. In particular, classical models invoke a notion of serial control in which representations can only be manipulated one rule at a time. When classical problem solvers search a problem space in order to solve a problem (Newell & Simon, 1972), they do so to discover a sequence of operations to perform.

    In contrast, when connectionist cognitive science abandons the digital computer metaphor, it abandons with it the assumption of centralized serial control. It does so because it views this as a fatal flaw in classical models, generating a “von Neumann bottleneck” that makes classical theories too slow to be useful in real time (Feldman & Ballard, 1982; Hillis, 1985). In place of centralized serial control, connectionists propose decentralized control in which many simple processes can operate in parallel (see Dawson & Schopflocher, 1992a).

    Clearly, from one perspective, there are obvious and important differences between connectionist and classical cognitive science. However, a shift in perspective can reveal a view in which striking similarities between these two approaches are evident. We saw earlier that classical cognitive science is performed at multiple levels of analysis, using formal methods to explore the computational level, behavioural methods to investigate the algorithmic level, and a variety of behavioural and biological techniques to elaborate the architectural and implementational levels. It is when connectionist cognitive science is examined from this same multiple-levels viewpoint that its relationship to classical cognitive science is made apparent (Dawson, 1998).

    Analyses at the computational level involve using some formal language to make proofs about cognitive systems. Usually these proofs concern statements about what kind of computation is being performed or what the general capabilities of a system are. Computational-level analyses have had a long and important history in connectionist cognitive science, and they have been responsible, for example, for proofs that particular learning rules will converge to desired least-energy or low-error states (Ackley, Hinton, & Sejnowski, 1985; Hopfield, 1982; Rosenblatt, 1962; Rumelhart, Hinton, & Williams, 1986b). Other examples of computational analyses were provided earlier in this chapter, in the discussion of carving pattern spaces into decision regions and the determination that output unit activities could be interpreted as being conditional probabilities.
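The convergence proofs cited above can be made concrete with a small illustration. The following sketch (not from the chapter; the function name and parameters are illustrative) applies Rosenblatt's error-correction rule to a single threshold unit on a linearly separable problem, where the perceptron convergence theorem guarantees the rule eventually carves the pattern space into two correct decision regions:

```python
def train_perceptron(patterns, targets, lr=0.1, max_epochs=100):
    """Train a single threshold unit with Rosenblatt's error-correction rule.

    Returns (weights, bias, epochs_used). On a linearly separable problem
    the convergence theorem guarantees errors eventually reach zero.
    """
    n = len(patterns[0])
    w = [0.0] * n
    b = 0.0
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for x, t in zip(patterns, targets):
            net = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1 if net > 0 else 0
            if y != t:  # weights change only when the unit errs
                errors += 1
                for i in range(n):
                    w[i] += lr * (t - y) * x[i]
                b += lr * (t - y)
        if errors == 0:  # every pattern now falls in the correct region
            return w, b, epoch
    return w, b, max_epochs

# Logical OR is linearly separable, so the rule must converge.
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]
w, b, epochs = train_perceptron(patterns, targets)
```

The point of the computational-level proof is precisely that this outcome is not an empirical accident of one run: for any linearly separable mapping, convergence is guaranteed in a finite number of updates.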

    That computational analysis is possible for both connectionist and classical cognitive science highlights one similarity between these two approaches. The results of some computational analyses, though, reveal a more striking similarity. One debate in the literature has concerned whether the associationist nature of artificial neural networks limits their computational power, to the extent that they are not appropriate for cognitive science. For instance, there has been considerable debate about whether PDP networks demonstrate appropriate systematicity and compositionality (Fodor & McLaughlin, 1990; Fodor & Pylyshyn, 1988; Hadley, 1994a, 1994b, 1997; Hadley & Hayward, 1997), two characteristics important for the use of recursion in classical models. However, beginning with the mathematical analyses of Warren McCulloch (McCulloch & Pitts, 1943) and continuing with modern computational analyses (Girosi & Poggio, 1990; Hartman, Keeler, & Kowalski, 1989; Lippmann, 1989; Moody & Darken, 1989; Poggio & Girosi, 1990; Renals, 1989; Siegelmann, 1999; Siegelmann & Sontag, 1991), we have seen that artificial neural networks belong to the class of universal machines. Classical and connectionist cognitive science are not distinguishable at the computational level of analysis (Dawson, 1998, 2009).
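A small example of how adding a layer extends computational power, as one step toward the universality results cited above: XOR is the classic problem a single threshold unit cannot solve (the limitation stressed by Minsky and Papert), yet a hand-wired two-layer threshold network solves it easily. The weights below are illustrative, not taken from the chapter:

```python
def step(net):
    """McCulloch-Pitts style all-or-none activation."""
    return 1 if net > 0 else 0

def xor_net(x1, x2):
    """A two-layer threshold network computing XOR, which no single
    threshold unit can compute."""
    h1 = step(x1 + x2 - 0.5)   # hidden unit 1 detects "x1 OR x2"
    h2 = step(x1 + x2 - 1.5)   # hidden unit 2 detects "x1 AND x2"
    return step(h1 - h2 - 0.5) # output fires for "OR but not AND"
```

The hidden units recode the input so that the output unit faces a linearly separable problem; the computational analyses cited in the text generalize this idea to show that such networks can compute any function a universal machine can.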

    Let us now turn to the next level of analysis, the algorithmic level. For classical cognitive science, the algorithmic level involves detailing the specific information processing steps that are involved in solving a problem. This almost always involves analyzing behaving systems in order to determine how representations are being manipulated, an approach typified by examining human problem solving with the use of protocol analysis (Ericsson & Simon, 1984; Newell & Simon, 1972). Algorithmic-level analyses for connectionists also involve analyzing the internal structure of intact systems—trained networks—in order to determine how they mediate stimulus-response regularities. We have seen examples of a variety of techniques that can and have been used to uncover the representations that are hidden within network structures, and which permit networks to perform desired input-output mappings. Some of these representations, such as coarse codes, look like alternatives to classical representations. Thus one of connectionist cognitive science’s contributions may be to permit new kinds of representations to be discovered and explored.

    Nevertheless, algorithmic-level analyses also reveal further similarities between connectionist and classical cognitive science. While these two approaches may propose different kinds of representations, they still are both representational. There is no principled difference between the classical sandwich and the connectionist sandwich (Calvo & Gomila, 2008). Furthermore, it is not even guaranteed that the contents of these two types of sandwiches will differ. One can peer inside an artificial neural network and find classical rules for logic (Berkeley et al., 1995) or even an entire production system (Dawson et al., 2000).

    At the architectural level of analysis, stronger differences between connectionist and classical cognitive science can be established. Indeed, the debate between these two approaches is in essence a debate about architecture. This is because many of the dichotomies introduced earlier—rationalism vs. empiricism, digital computer vs. analog brain, structure/process vs. dynamic data, serialism vs. parallelism—are differences in opinion about cognitive architecture.

    In spite of these differences, and in spite of connectionism’s search for biologically plausible information processing, there is a key similarity at the architectural level between connectionist and classical cognitive science: at this level, both propose architectures that are functional, not physical. The connectionist architecture consists of a set of building blocks: units and their activation functions, modifiable connections, learning rules. But these building blocks are functional accounts of the information processing properties of neurons; other brain-like properties are ignored. Consider one response (Churchland & Churchland, 1990) to the claim that the mind is the product of the causal powers of the brain (Searle, 1990):

    We presume that Searle is not claiming that a successful artificial mind must have all the causal powers of the brain, such as the power to smell bad when rotting, to harbor slow viruses such as kuru, to stain yellow with horseradish peroxidase and so forth. Requiring perfect parity would be like requiring that an artificial flying device lay eggs. (Churchland & Churchland, 1990, p. 37)

    It is the functional nature of the connectionist architecture that enables it to be almost always studied by simulating it—on a digital computer!

    The functional nature of the connectionist architecture raises some complications when the implementational level of analysis is considered. On the one hand, many researchers view connectionism as providing implementational-level theories of cognitive phenomena. At this level, one finds researchers exploring relationships between biological receptive fields and patterns of connectivity and similar properties of artificial networks (Ballard, 1986; Bankes & Margoliash, 1993; Bowers, 2009; Guzik, Eaton, & Mathis, 1999; Keith, Blohm, & Crawford, 2010; Moorhead, Haig, & Clement, 1989; Poggio, Torre, & Koch, 1985; Zipser & Andersen, 1988). One also encounters researchers finding biological mechanisms that map onto architectural properties such as learning rules. For example, there is a great deal of interest in relating the actions of certain neurotransmitters to Hebb learning (Brown, 1990; Gerstner & Kistler, 2002; van Hemmen & Senn, 2002). Similarly, it has been argued that connectionist networks provide an implementational account of associative learning (Shanks, 1995), a position that ignores its potential contributions at other levels of analysis (Dawson, 2008).

    On the other hand, the functional nature of the connectionist architecture has resulted in its biological status being questioned or challenged. There are many important differences between biological and artificial neural networks (Crick & Asanuma, 1986; Douglas & Martin, 1991; McCloskey, 1991). There is very little biological evidence in support of important connectionist learning rules such as backpropagation of error (Mazzoni, Andersen, & Jordan, 1991; O’Reilly, 1996; Shimansky, 2009). Douglas and Martin (1991, p. 292) dismissed artificial neural networks as merely being “stick and ball models.” Thus whether connectionist cognitive science is a biologically plausible alternative to classical cognitive science remains an open issue.

    That connectionist cognitive science has established itself as a reaction against classical cognitive science cannot be denied. However, as we have seen in this section, it is not completely clear that connectionism represents a radical alternative to the classical approach (Schneider, 1987); it may instead be much more closely related to classical cognitive science than a brief glance at some of the literature might suggest (Dawson, 1998). It is certainly the case that connectionist cognitive science has provided important criticisms of the classical approach and has therefore been an important contributor to theory of mind.

    Interestingly, many of the criticisms that have been highlighted by connectionist cognitive science—slowness, brittleness, biological implausibility, overemphasis of logicism and disembodiment—have been echoed by a third school, embodied cognitive science. Furthermore, related criticisms have been applied by embodied cognitive scientists against connectionist cognitive science. Not surprisingly, then, embodied cognitive science has generated a very different approach to deal with these issues than has connectionist cognitive science.

    In Chapter 5 we turn to the elements of this third “flavour” of cognitive science. As has been noted in this final section of Chapter 4, there appears to be ample room for finding relationships between connectionism and classicism, such that the umbrella term cognitive science can be aptly applied to both. We will see that embodied cognitive science poses some interesting and radical challenges, and that its existence calls many of the core features shared by connectionism and classicism into question.
