1.4: Cognitive Science or the Cognitive Sciences?
One reason that Wilhelm Wundt is seen as the founder of psychology is that he established its first academic foothold at the University of Leipzig. Wundt created the first experimental psychology laboratory there in 1879. Psychology was officially part of the university calendar by 1885. Today, hundreds of psychology departments exist at universities around the world.
Psychology is clearly healthy as an academic discipline. However, its status as a science is less clear. Sigmund Koch, a noted critic of psychology (Koch, 1959, 1969, 1976, 1981, 1993), argued for replacing the term psychology with the psychological studies, because in his view it was impossible for psychology to exist as a coherent discipline.
Although it is much younger than psychology, cognitive science has certainly matured into a viable academic discipline. In the fall of 2010, the website for the Cognitive Science Society listed 77 universities around the world that offered cognitive science as a program of study. Recent developments in cognitive science, though, have raised questions about its scientific coherence. To parallel Koch, should we examine “cognitive science,” or is it more appropriate to inquire about “the cognitive sciences”? Investigating this issue is one theme of the current book.
According to psychologist George Miller (2003), cognitive science was born on September 11, 1956. At this early stage, the unity of cognitive science was not really an issue. Digital computers were a relatively recent invention (Goldstine, 1993; Lavington, 1980; Williams, 1997; Zuse, 1993). At the time, they presented a unified notion of information processing to be adopted by cognitive science. Digital computers were automatic symbol manipulators (Haugeland, 1985): they were machines that manipulated symbolic representations by applying well-defined rules; they brought symbolic logic to mechanized life. Even though some researchers had already noted that the brain may not work exactly like a computer, the brain was still assumed to be digital, because the all-or-none generation of an action potential was interpreted as being equivalent to assigning a truth value in a Boolean logic (McCulloch & Pitts, 1943; von Neumann, 1958).
Classical cognitive science, which is the topic of Chapter 3, was the first school of thought in cognitive science and continues to dominate the field to this day. It exploited the technology of the day by interpreting “information processing” as meaning “rule-governed manipulation of symbols” (Feigenbaum & Feldman, 1995). This version of the information processing hypothesis bore early fruit, producing major advances in the understanding of language (Chomsky, 1957, 1959b, 1965) and of human problem solving (Newell, Shaw, & Simon, 1958; Newell & Simon, 1961, 1972). Later successes with this approach led to the proliferation of “thinking artifacts”: computer programs called expert systems (Feigenbaum & McCorduck, 1983; Kurzweil, 1990). Some researchers have claimed that the classical approach is capable of providing a unified theory of thought (Anderson, 1983; Anderson et al., 2004; Newell, 1990).
The successes of the classical approach were in the realm of well-posed problems, that is, problems with unambiguously defined knowledge states and goal states, as well as explicitly defined operations for converting one state of knowledge into another. If a problem is well posed, then its solution can be described as a search through a problem space, and a computer can be programmed to perform this search (Newell & Simon, 1972). However, this emphasis led to growing criticisms of the classical approach. One general issue was whether human cognition went far beyond what could be captured just in terms of solving well-posed problems (Dreyfus, 1992; Searle, 1980; Weizenbaum, 1976).
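The idea of solving a well-posed problem by searching a problem space can be made concrete with a small sketch. The procedure below is a generic breadth-first search, not any particular program from the literature, and the toy arithmetic problem and its operator limits are invented purely for illustration: the point is that once states, a goal, and operators are explicitly defined, the search itself is mechanical.

```python
from collections import deque

def search(start, goal, operators):
    """Breadth-first search through a problem space.

    A well-posed problem supplies an initial state, a goal state,
    and explicit operators for transforming one state into another --
    exactly the ingredients this generic procedure requires.
    """
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path                      # sequence of knowledge states
        for op in operators:
            nxt = op(state)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Toy problem: reach 14 from 1 using "double" and "add one" operators,
# never exceeding 20.
ops = [lambda n: n * 2 if n * 2 <= 20 else None,
       lambda n: n + 1 if n + 1 <= 20 else None]
print(search(1, 14, ops))
```

Ill-posed problems resist exactly this treatment: without well-defined states, goals, and operators, there is no space for such a procedure to search.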
Indeed, the classical approach was adept at producing computer simulations of game playing and problem solving, but was not achieving tremendous success in such fields as speech recognition, language translation, or computer vision. “An overall pattern had begun to take shape. . . . an early, dramatic success based on the easy performance of simple tasks, or low-quality work on complex tasks, and then diminishing returns, disenchantment, and, in some cases, pessimism” (Dreyfus, 1992, p. 99).
Many abilities that humans are expert at without training, such as speaking, seeing, and walking, seemed to be beyond the grasp of classical cognitive science. These abilities involve dealing with ill-posed problems. An ill-posed problem is deeply ambiguous, has poorly defined knowledge states and goal states, and involves poorly defined operations for manipulating knowledge. As a result, it is not well suited to classical analysis, because a problem space cannot be defined for an ill-posed problem. This suggests that the digital computer provides a poor definition of the kind of information processing performed by humans. “In our view people are smarter than today’s computers because the brain employs a basic computational architecture that is more suited to deal with a central aspect of the natural information processing tasks that people are so good at” (Rumelhart & McClelland, 1986c, p. 3).
Connectionist cognitive science reacted against classical cognitive science by proposing a cognitive architecture that is qualitatively different from that inspired by the digital computer metaphor (Bechtel & Abrahamsen, 2002; Churchland, Koch, & Sejnowski, 1990; Churchland & Sejnowski, 1992; Clark, 1989, 1993; Horgan & Tienson, 1996; Quinlan, 1991). Connectionists argued that the problem with the classical notion of information processing was that it ignored the fundamental properties of the brain. Connectionism cast itself as a neuronally inspired, biologically plausible alternative to classical cognitive science (Bechtel & Abrahamsen, 2002; McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986c). “No serious study of mind (including philosophical ones) can, I believe, be conducted in the kind of biological vacuum to which [classical] cognitive scientists have become accustomed” (Clark, 1989, p. 61).
The architecture proposed by connectionism was the artificial neural network (Caudill & Butler, 1992a, 1992b; Dawson, 2004, 2005; De Wilde, 1997; Muller & Reinhardt, 1990; Rojas, 1996). An artificial neural network is a system of simple processors, analogous to neurons, which operate in parallel and send signals to one another via weighted connections that are analogous to synapses. Signals detected by input processors are converted into a response that is represented as activity in a set of output processors. Connection weights determine the input-output relationship mediated by a network, but they are not programmed. Instead, a learning rule is used to modify the weights. Artificial neural networks learn from example.
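The contrast with programmed symbol manipulation can be sketched with a single artificial unit trained by Rosenblatt-style error correction. This is a minimal illustration under simplifying assumptions (one output unit, binary all-or-none activation), not a model of any particular network from the connectionist literature; the point is that the input-output mapping is never programmed but emerges as a learning rule adjusts the connection weights.

```python
def train_perceptron(examples, epochs=20, rate=0.1):
    """Learn connection weights from examples via an error-correcting rule.

    Weights start at zero and are nudged after each incorrect response,
    so the network's behaviour is acquired from training examples rather
    than specified by explicit rules.
    """
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            net = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if net > 0 else 0      # all-or-none activation
            error = target - output
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Logical AND is linearly separable, so a single unit can learn it.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
```

A single unit of this kind cannot learn linearly nonseparable mappings such as exclusive-or, which is precisely the limitation, noted below, that stalled early network research until multilayer training methods appeared.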
Artificial neural networks negate many of the fundamental properties of the digital computer (von Neumann, 1958). Gone was the notion that the brain was a digital symbol manipulator governed by a serial central controller. In its place, the processes of the brain were described as subsymbolic and parallel (Smolensky, 1988); control of these processes was decentralized. Gone was the classical distinction between structure and process, in which a distinct set of explicit rules manipulated discrete symbols stored in a separate memory. In its place, the brain was viewed as a distributed system in which problem solutions emerged from the parallel activity of a large number of simple processors: a network was both structure and process, and networks both stored and modified information at the same time (Hillis, 1985). Gone was the assumption that information processing was akin to doing logic (Oaksford & Chater, 1991). In its place, connectionists viewed the brain as a dynamic, statistical pattern recognizer (Churchland & Sejnowski, 1989; Grossberg, 1980; Smolensky, 1988).
With all such changes, though, connectionism still concerned itself with cognition as information processing—but of a different kind: “These dissimilarities do not imply that brains are not computers, but only that brains are not serial digital computers” (Churchland, Koch, & Sejnowski, 1990, p. 48, italics original).
Connectionist models of cognition have had as long a history as have classical simulations (Dawson, 2004; Medler, 1998). McCulloch and Pitts described powerful neural network models in the 1940s (McCulloch, 1988a), and Rosenblatt’s (1958, 1962) perceptrons were simple artificial neural networks that were not programmed, but instead learned from example. Such research waned in the late 1960s as the result of proofs about the limitations of simple artificial neural networks (Minsky & Papert, 1988; Papert, 1988).
However, the limitations of early networks were overcome in the mid-1980s, by which time new techniques had been discovered that permitted much more powerful networks to learn from examples (Ackley, Hinton, & Sejnowski, 1985; Rumelhart, Hinton, & Williams, 1986b). Because of these new techniques, modern connectionism has achieved nearly equal status to classical cognitive science. Artificial neural networks have been used to model a wide range of ill-posed problems, have generated many expert systems, and have successfully simulated domains once thought to be exclusive to the classical approach (Bechtel & Abrahamsen, 2002; Carpenter & Grossberg, 1992; Enquist & Ghirlanda, 2005; Gallant, 1993; Gluck & Myers, 2001; Grossberg, 1988; Kasabov, 1996; Pao, 1989; Ripley, 1996; Schmajuk, 1997; Wechsler, 1992).
In a review of a book on neural networks, Hanson and Olson (1991, p. 332) claimed that “the neural network revolution has happened. We are living in the aftermath.” This revolution, like most, has been messy and acrimonious, a marked departure from the sense of unity that cognitive science conveyed at the time of its birth. A serious and angry debate about the merits of classical versus connectionist cognitive science rages in the literature.
On the one hand, classical cognitive scientists view the rise of connectionism as being a rebirth of the associationist and behaviourist psychologies that cognitivism had successfully replaced. Because connectionism eschewed rules and symbols, classicists argued that it was not powerful enough to account for the regularities of thought and language (Fodor & McLaughlin, 1990; Fodor & Pylyshyn, 1988; Pinker, 2002; Pinker & Prince, 1988). “The problem with connectionist models is that all the reasons for thinking that they might be true are reasons for thinking that they couldn’t be psychology” (Fodor & Pylyshyn, 1988, p. 66). A Scientific American news story on a connectionist expert system included Pylyshyn’s comparison of connectionism to voodoo: “People are fascinated by the prospect of getting intelligence by mysterious Frankenstein-like means—by voodoo! And there have been few attempts to do this as successful as neural nets” (Stix, 1994, p. 44). The difficulty with interpreting the internal structure of connectionist networks has been used to argue against their ability to provide models, theories, or even demonstrations to cognitive science (McCloskey, 1991).
On the other hand, and not surprisingly, connectionist researchers have replied in kind. Some of these responses have been arguments about problems that are intrinsic to the classical architecture (e.g., slow, brittle models) combined with claims that the connectionist architecture offers solutions to these problems (Feldman & Ballard, 1982; Rumelhart & McClelland, 1986c). Others have argued that classical models have failed to provide an adequate account of experimental studies of human cognition (Oaksford, Chater, & Stenning, 1990). Connectionist practitioners have gone as far as to claim that they have provided a paradigm shift for cognitive science (Schneider, 1987).
Accompanying claims for a paradigm shift is the view that connectionist cognitive science is in a position to replace an old, tired, and failed classical approach. Searle (1992, p. 247), in a defense of connectionism, has described traditional cognitivist models as being “obviously false or incoherent.” Some would claim that classical cognitive science doesn’t study the right phenomena. “The idea that human activity is determined by rules is not very plausible when one considers that most of what we do is not naturally thought of as problem solving” (Horgan & Tienson, 1996, p. 31). Paul Churchland noted that “good old-fashioned artificial intelligence was a failure. The contribution of standard architectures and standard programming artificial intelligence was a disappointment” (Baumgartner & Payr, 1995, p. 36). Churchland went on to argue that this disappointment will be reversed with the adoption of more brain-like architectures.
Clearly, the rise of connectionism represents a fragmentation of cognitive science. This fragmentation is heightened by the fact that connectionists themselves freely admit that there are different notions about information processing that fall under the connectionist umbrella (Horgan & Tienson, 1996; Rumelhart & McClelland, 1986c). “It is not clear that anything has appeared that could be called a, let alone the, connectionist conception of cognition” (Horgan & Tienson, 1996, p. 3).
If the only division within cognitive science was between classical and connectionist schools of thought, then the possibility of a unified cognitive science still exists. Some researchers have attempted to show that these two approaches can be related (Dawson, 1998; Smolensky & Legendre, 2006), in spite of the differences that have been alluded to in the preceding paragraphs. However, the hope for a unified cognitive science is further challenged by the realization that a third school of thought has emerged that represents a reaction to both classical and connectionist cognitive science.
This third school of thought is embodied cognitive science (Chemero, 2009; Clancey, 1997; Clark, 1997; Dawson, Dupuis, & Wilson, 2010; Robbins & Aydede, 2009; Shapiro, 2011). Connectionist cognitive science arose because its proponents felt that classical cognitive science did not pay sufficient attention to a particular part of the body, the brain. Embodied cognitive science critiques both classical and connectionist approaches because both ignore the whole body and its interaction with the world. Radical versions of embodied cognitive science aim to dispense with mental representations completely, and argue that the mind extends outside the brain, into the body and the world (Agre, 1997; Chemero, 2009; Clancey, 1997; Clark, 2008; Clark & Chalmers, 1998; Noë, 2009; Varela, Thompson, & Rosch, 1991; Wilson, 2004).
A key characteristic of embodied cognitive science is that it abandons methodological solipsism (Wilson, 2004). According to methodological solipsism (Fodor, 1980), representational states are individuated only in terms of their relations to other representational states. Relations of the states to the external world—the agent’s environment—are not considered. “Methodological solipsism in psychology is the view that psychological states should be construed without reference to anything beyond the boundary of the individual who has those states” (Wilson, 2004, p. 77).
Methodological solipsism is reflected in the sense-think-act cycle that characterizes both classical and connectionist cognitive science (Pfeifer & Scheier, 1999). The sense-think-act cycle defines what is also known as the classical sandwich (Hurley, 2001), in which there is no direct contact between sensing and acting. Instead, thinking—or representations—is the “filling” of the sandwich, with the primary task of planning action on the basis of sensed data. Both classical and connectionist cognitive science adopt the sense-think-act cycle because both have representations standing between perceptual inputs and behavioural outputs. “Representation is an activity that individuals perform in extracting and deploying information that is used in their further actions” (Wilson, 2004, p. 183).
Embodied cognitive science replaces the sense-think-act cycle with sense-act processing (Brooks, 1991, 1999; Clark, 1997, 1999, 2003; Hutchins, 1995; Pfeifer & Scheier, 1999). According to this alternative view, there are direct links between sensing and acting. The purpose of the mind is not to plan action, but is instead to coordinate sense-act relations. “Models of the world simply get in the way. It turns out to be better to use the world as its own model” (Brooks, 1991, p. 139). Embodied cognitive science views the brain as a controller, not as a planner. “The realization was that the so-called central systems of intelligence—or core AI as it has been referred to more recently—was perhaps an unnecessary illusion, and that all the power of intelligence arose from the coupling of perception and actuation systems” (Brooks, 1999, p. viii).
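Sense-act processing in the spirit of Brooks’ reactive architectures can be sketched as a controller whose rules couple sensed conditions directly to actions, with no world model or planning step in between. This is an illustrative toy only; the sensor names and motor commands below are hypothetical, and real subsumption architectures layer many such behaviours in hardware.

```python
def reactive_controller(sensors):
    """Map sensor readings directly to a motor command.

    Each rule is a direct sense-act link; rules are checked in priority
    order, so higher-priority behaviours subsume lower ones. There is no
    internal model of the world and no planning: the world serves as its
    own model. (Sensor names and actions are hypothetical.)
    """
    if sensors.get("bumper_pressed"):   # highest priority: escape contact
        return "reverse"
    if sensors.get("obstacle_ahead"):   # next layer: steer away
        return "turn_left"
    return "move_forward"               # default layer: wander

print(reactive_controller({"obstacle_ahead": True}))   # turn_left
print(reactive_controller({"bumper_pressed": True}))   # reverse
print(reactive_controller({}))                         # move_forward
```

Note what is absent: no representation of the environment is built, stored, or consulted; sensing feeds action without a “thinking” step in between.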
In replacing the sense-think-act cycle with the sense-act cycle, embodied cognitive science distances itself from classical and connectionist cognitive science. This is because sense-act processing abandons planning in particular and the use of representations in general. Brooks (1999, p. 170) wrote: “In particular I have advocated situatedness, embodiment, and highly reactive architectures with no reasoning systems, no manipulable representations, no symbols, and totally decentralized computation.” Other theorists make stronger versions of this claim: “I hereby define radical embodied cognitive science as the scientific study of perception, cognition, and action as necessarily embodied phenomena, using explanatory tools that do not posit mental representations” (Chemero, 2009, p. 29).
The focus on sense-act processing leads directly to the importance of embodiment. Embodied cognitive science borrows a key idea from cybernetics: that agents are adaptively linked to their environment (Ashby, 1956; Wiener, 1948). This adaptive link is a source of feedback: an animal’s actions on the world can change the world, which in turn will affect later actions. Embodied cognitive science also leans heavily on Gibson’s (1966, 1979) theory of direct perception. In particular, the adaptive link between an animal and its world is affected by the physical form of the animal—its embodiment. “It is often neglected that the words animal and environment make an inseparable pair” (Gibson, 1979, p. 8). Gibson proposed that sensing agents “picked up” properties that indicated potential actions that could be taken on the world. Again, the definition of such affordances requires taking the agent’s form into account.
Embodied cognitive science also distances itself from both classical and connectionist cognitive science by proposing the extended mind hypothesis (Clark, 1997, 1999, 2003, 2008; Wilson, 2004, 2005). According to the extended mind hypothesis, the mind is not separated from the world by the skull. Instead, the boundary between the mind and the world is blurred, or has disappeared. A consequence of the extended mind is cognitive scaffolding, where the abilities of “classical” cognition are enhanced by using the external world as support. A simple example of this is extending memory by using external aids, such as notepads. However, full-blown information processing can be placed into the world if appropriate artifacts are used. Hutchins (1995) provided many examples of navigational tools that externalize computation. “It seems that much of the computation was done by the tool, or by its designer. The person somehow could succeed by doing less because the tool did more” (p. 151).
Embodied cognitive science provides another fault line in a fragmenting cognitive science. With notions like the extended mind, the emphasis on action, and the abandonment of representation, it is not clear at first glance whether embodied cognitive science is redefining the notion of information processing or abandoning it altogether. “By failing to understand the source of the computational power in our interactions with simple ‘unintelligent’ physical devices, we position ourselves well to squander opportunities with so-called intelligent computers” (Hutchins, 1995, p. 171).
Further fragmentation is found within the embodied cognition camp (Robbins & Aydede, 2009; Shapiro, 2011). Embodied cognitive scientists have strong disagreements amongst themselves about the degree to which each of their radical views is to be accepted. For instance, Clark (1997) argued that there is room for representation in embodied cognitive science, whereas Chemero (2009) argued that there is not.
In summary, early developments in computer science led to a unitary notion of information processing. When information processing was adopted as a hypothesis about cognition in the 1950s, the result was a unified cognitive science. However, a half century of developments in cognitive science has led to a growing fragmentation of the field. Disagreements about the nature of representations, and even about their necessity, have spawned three strong camps within cognitive science: classical, connectionist, and embodied. Fragmentation within each of these camps can easily be found. Given this situation, it might seem foolish to ask whether there exist any central ideas that can be used to unify cognitive science. However, asking that question is an important thread that runs through the current book.