
9.5: A Cognitive Synthesis


    Modern experimental psychology arose around 1860 (Fechner, 1966), and more than a century and a half later is viewed by many as still being an immature, pre-paradigmatic discipline (Buss, 1978; Leahey, 1992). The diversity of its schools of thought and the breadth of topics that it studies are a testament to experimental psychology’s youth as a science. “In the early stages of the development of any science different men confronting the same range of phenomena, but not usually all the same particular phenomena, describe and interpret them in different ways” (Kuhn, 1970, p. 17).

    Cognitive science was born in 1956 (Miller, 2003). Because it is about a century younger than experimental psychology, it would not be surprising to discover that cognitive science is also pre-paradigmatic. This might explain the variety of opinions about the nature of cognition, introduced earlier as the competing elements of classical, connectionist, and embodied cognitive science. “The pre-paradigm period, in particular, is regularly marked by frequent and deep debates over legitimate methods, problems, and standards of solution, though these serve rather to define schools than produce agreement” (Kuhn, 1970, pp. 47–48).

    The current state of cognitive science defines an as yet incomplete dialectic. Competition amongst classical, connectionist, and embodied cognitive science reflects existing tensions between thesis and antithesis. What is missing is a state of synthesis in which cognitive science integrates key ideas from its competing schools of thought. This integration is necessary, because it is unlikely that, for instance, a classical characterization of the pure cognitive system will provide a complete explanation of cognition (Miller, Galanter, & Pribram, 1960; Neisser, 1976; Norman, 1980).

    In the latter chapters of the current book, several lines of evidence are presented to suggest that synthesis within cognitive science is possible. First, it is extremely difficult to find marks of the classical, that is, characteristics that uniquely distinguish classical cognitive science from either the connectionist or embodied approaches. For instance, classical cognitive science was inspired by the digital computer, but a variety of digital computers incorporated processes consistent with connectionism (such as parallel processing) and with embodied cognitive science (such as external representations).

    A second line of evidence is that there is a high degree of methodological similarity among the three approaches. In particular, each school of cognitive science can be characterized as exploring four different levels of investigation: computational, algorithmic, architectural, and implementational. We saw in Chapter 6 that the different approaches disagree about the technical details within each level. Nevertheless, all four levels are investigated by all three approaches within cognitive science. Furthermore, when different approaches are compared at each level, strong similarities can be identified. This is why, for instance, it has been claimed that the distinction between classical and connectionist cognitive science is blurred (Dawson, 1998).

    A third line of evidence accounts for the methodological similarity amongst the different approaches: cognitive scientists from different schools of thought share many core assumptions. Though they may disagree about its technical details, all cognitive scientists view cognition as a form of information processing. For instance, each of the three schools of thought appeals to the notion of representation, while at the same time debating its nature. Are representations symbols, distributed patterns, or external artifacts? All cognitive scientists have rejected Cartesian dualism and are seeking materialist explanations of cognition.

    More generally, all three approaches in cognitive science agree that cognition involves interactions between the world and states of agents. This is why a pioneer of classical cognitive science can make the following embodied claim: “A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself” (Simon, 1969, p. 25). However, it is again fair to say that the contributions of world, body, and mind receive different degrees of emphasis within the three approaches to cognitive science. We saw earlier that production system pioneers admitted that they emphasized internal planning and neglected perception and action (Anderson et al., 2004; Newell, 1990). Only recently have they turned to including sensing and acting in their models (Kieras & Meyer, 1997; Meyer et al., 2001; Meyer & Kieras, 1997a, 1997b, 1999; Meyer et al., 1995). Even so, they are still very reluctant to add sense-act processing—links between sensing and acting that are not mediated by internal representations—to their sense-think-act production systems (Dawson, Dupuis, & Wilson, 2010).

    A fourth line of evidence is the existence of hybrid theories, such as natural computation (Marr, 1982) or Pylyshyn’s (2003) account of visual cognition. These theories explicitly draw upon concepts from each approach to cognitive science. Hybrid theories are only possible when there is at least tacit recognition that each school of thought within cognitive science has important, co-operative contributions to make. Furthermore, such theories exist precisely because such co-operation is needed: no one school of thought provides a sufficient explanation of cognition, but each contributes a necessary component of such an explanation.

    It is one thing to note the possibility of a synthesis in cognitive science. It is quite another to point the way to bringing such a synthesis into being. One required component, discussed earlier in this chapter, is being open to the possible contributions of the different schools of thought, an openness demonstrated by the pragmatic and interdisciplinary natural computation theory of perception.

    A second component, which is the topic of this final section of the book, is being open to a methodological perspective that pervaded early cognitive science and its immediate ancestors, but which has become less favored in more recent times. Synthesis in cognitive science may require a return, at least in part, to the practice of synthetic psychology.

    Present-day cognitive science for the most part employs analytic, and not synthetic, methodological practices. That is, most cognitive scientists are in the business of carrying out reverse engineering (Dennett, 1998). They start with a complete, pre-existing cognitive agent. They then observe its behavior, as well as how that behavior is affected by various experimental manipulations. The results of these observations are frequently used to create theories in the form of computer simulations (Newell & Simon, 1961). For instance, Newell and Simon (1972) collected data in the form of verbal protocols, and then used these protocols to derive working production systems. In other words, when analytic methodologies are used, the collection of data precedes the creation of a model.

    The analytic nature of most cognitive science is reflected in its primary methodology, functional analysis, a prototypical example of reverse engineering (Cummins, 1975, 1983). Functional analysis dictates a top-down decomposition from the broad and abstract (i.e., computational specification of functions) to the narrower and more concrete (i.e., architecture and implementation).

    Even the natural computation approach in vision endorsed a top-down analytic approach, moving from computational to implementational analyses instead of in the opposite direction. This was because higher-level analyses were used to guide interpretations of the lower levels.

    In order to understand why the receptive fields are as they are—why they are circularly symmetrical and why their excitatory and inhibitory regions have characteristic shapes and distributions—we have to know a little of the theory of differential operators, band-pass channels, and the mathematics of the uncertainty principle. (Marr, 1982, p. 28)

    An alternative approach is synthetic, not analytic; it is bottom-up instead of top-down; and it applies forward engineering instead of reverse engineering. This approach has been called synthetic psychology (Braitenberg, 1984). In synthetic psychology, one takes a set of primitive building blocks of interest and creates a working system from them. The behavior of this system is observed in order to determine what surprising phenomena might emerge from simple components, particularly when they are embedded in an interesting or complex environment. As a result, in synthetic psychology, models precede data, because they are the source of data.

    The forward engineering that characterizes synthetic psychology proceeds as a bottom-up construction (and later exploration) of a cognitive model. Braitenberg (1984) argued that this approach would produce simpler theories than those produced by analytic methodologies, because analytic theorizing tends to overlook the influence of the environment, attributing to internal mechanisms complexity that actually arises from an agent’s interactions with its surroundings; this is known as the frame of reference problem (Pfeifer & Scheier, 1999). Also, analytic techniques have only indirect access to internal components, in contrast to the complete knowledge of such structures that is possessed by a synthetic designer.

    It is pleasurable and easy to create little machines that do certain tricks. It is also quite easy to observe the full repertoire of behavior of these machines—even if it goes beyond what we had originally planned, as it often does. But it is much more difficult to start from the outside and try to guess internal structure just from the observation of the data. (Braitenberg, 1984, p. 20)

    Although Braitenberg proposed forward engineering as a novel methodology in 1984, it had been widely practiced by cyberneticists beginning in the late 1940s. For instance, the original autonomous robots, Grey Walter’s (1950a, 1950b, 1951, 1963) Tortoises, were created to observe whether complex behavior would be supported by a small set of cybernetic principles. Ashby’s (1956, 1960) Homeostat was created to study feedback relationships between simple machines; after it was constructed, Ashby observed that this device demonstrated interesting and complicated adaptive relationships to a variety of environments. This kind of forward engineering is currently prevalent in one modern field that has inspired embodied cognitive science, behavior-based robotics (Brooks, 1999; Pfeifer & Scheier, 1999; Sharkey, 2006).

    Forward engineering is not limited to the creation of autonomous robots. It has been argued that the synthetic approach characterizes a good deal of connectionism (Dawson, 2004). The thrust of this argument is that the building blocks being used are the components of a particular connectionist architecture. These are put together into a working system whose behavior can then be explored. In the connectionist case, the synthesis of a working network involves using a training environment to modify a network by applying a general learning rule.
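
    To make this connectionist synthesis concrete, here is a minimal sketch: a single sigmoid output unit is assembled from generic building blocks (weighted connections, a bias, an activation function) and is then shaped by a toy training environment using the delta rule. The choice of task (logical AND), the learning rate, and every other particular are illustrative assumptions rather than anything prescribed by the text.

```python
# A hypothetical illustration of connectionist synthesis: assemble a network
# from generic building blocks (weighted connections, a bias, a sigmoid unit),
# expose it to a toy training environment, and let a general learning rule
# (the delta rule) modify it. All particulars here are illustrative choices.
import math
import random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

# Building blocks: one output unit with two input connections and a bias.
random.seed(1)
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = random.uniform(-0.5, 0.5)

def respond(pattern):
    net = sum(w * x for w, x in zip(weights, pattern)) + bias
    return sigmoid(net)

# A toy training environment: input patterns paired with desired responses
# (logical AND; any small environment would serve the same purpose).
environment = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# The environment modifies the network by repeated application of the delta rule.
rate = 0.5
for epoch in range(5000):
    for pattern, target in environment:
        out = respond(pattern)
        delta = rate * (target - out) * out * (1.0 - out)
        weights = [w + delta * x for w, x in zip(weights, pattern)]
        bias += delta

# Only after the network exists is its behavior observed and explored.
for pattern, target in environment:
    print(pattern, "response:", round(respond(pattern), 2), "desired:", target)
```

    The ordering of steps is the point of the sketch: the network is built first, and its behavior afterwards becomes the data to be explored.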

    Classical cognitive science is arguably the most commonly practiced form of cognitive science, and it is also the least likely of the three approaches to adopt synthetic methodologies. However, this does not mean that classical cognitive scientists have not usefully employed forward engineering. One prominent example is the use of production systems to study human problem solving (Newell & Simon, 1972). Clearly the analysis of verbal protocols provided a set of potential productions to include in a model. However, this analysis was followed by a highly synthetic phase of model development.

    This synthetic phase proceeded as follows: Newell and Simon (1972) used verbal protocols to rank the various productions available in terms of their overall usage. They then began by creating a production system model that was composed of only a single production, the one most used. The performance of this simple system was then compared to the human protocol. The next step was to create a new production system by adding the next most used production to the original model, and examining the behavior of the new two-production system. This process would continue, usually revealing better performance of the model (i.e., a better fit to human data) as the model was elaborated by adding each new production.
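
    The incremental procedure can be illustrated with a deliberately trivial sketch. The productions, the “protocol,” and the fit measure below are all invented for the example and are not Newell and Simon’s actual materials; the sketch only shows the shape of the method, namely growing the model one production at a time, most frequently used first, and comparing each successively larger system to the protocol.

```python
# A toy, hypothetical illustration of the incremental procedure: productions
# are ranked by (imaginary) usage, added to the model one at a time, and each
# successively larger production system is run and compared to a "protocol."
# None of these productions or data come from Newell and Simon's actual work.

# A production is a (name, condition, action) triple over a working memory of facts.
PRODUCTIONS = [
    ("apply-operator", lambda wm: "operator-selected" in wm,
     lambda wm: wm | {"state-changed"}),
    ("select-operator", lambda wm: "goal-set" in wm and "operator-selected" not in wm,
     lambda wm: wm | {"operator-selected"}),
    ("set-goal", lambda wm: "goal-set" not in wm,
     lambda wm: wm | {"goal-set"}),
]  # listed in assumed order of overall usage, most used first

def run(productions, wm=frozenset(), steps=6):
    """Fire the first matching production on each cycle; return the trace of firings."""
    trace = []
    for _ in range(steps):
        for name, condition, action in productions:
            if condition(wm):
                wm = action(wm)
                trace.append(name)
                break
        else:
            break  # no production matched, so the system halts
    return trace

# An imaginary protocol-derived sequence of steps that the model should reproduce.
protocol = ["set-goal", "select-operator", "apply-operator"]

def fit(trace):
    """Crude fit: the proportion of protocol steps reproduced in order."""
    return sum(a == b for a, b in zip(protocol, trace)) / len(protocol)

# Synthetic phase: grow the model one production at a time and observe each system.
for size in range(1, len(PRODUCTIONS) + 1):
    trace = run(PRODUCTIONS[:size])
    print(size, "production(s):", trace, "fit =", round(fit(trace), 2))
```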

    Forward engineering, in all of the examples alluded to above, provides a systematic exploration of what an architecture can produce “for free.” That is, it is not used to create a model that fits a particular set of data. Instead, it is used to show how much surprising and complex behavior can be generated from a simple set of components—particularly when that architecture is embedded in an interesting environment. It is used to explore the limits of a system—how many unexpected complexities appear in its behavior? What behaviors are still beyond the system’s capability? While reverse engineering encourages the derivation of a model constrained by data, forward engineering is concerned with a much more liberating process of model design. “Only about 1 in 20 [students] ‘gets it’—that is, the idea of thinking about psychological problems by inventing mechanisms for them and then trying to see what they can and cannot do” (Minsky, 1995, personal communication).

    The liberating aspect of forward engineering is illustrated in the development of the LEGO robot AntiSLAM (Dawson, Dupuis, & Wilson, 2010). Originally, this robot was created as a sonar-based version of one of Braitenberg’s (1984) simple thought experiments, Vehicle 2. Vehicle 2 used two light sensors to control the speeds of two separate motors and generated photophobic or photophilic behavior depending upon its wiring. We replaced the light sensors with two sonar sensors, which itself was a departure from convention, because the standard view was that the two sensors would interfere with one another (Boogaarts, 2007). However, we found that the robot generated nimble behaviors and effortlessly navigated around many different kinds of obstacles at top speed. A slight tweak of the robot’s architecture caused it to follow along a wall on its right. We then realized that if the environment for the robot became a reorientation arena, then it would generate rotational error. The forward engineering of this very simple robot resulted in our discovery that it generated navigational regularities “for free.”
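
    The sensor-to-motor coupling behind Vehicle 2 is simple enough to sketch as a short simulation. In the hypothetical code below, two sensors each drive a motor, and whether the wiring is crossed is the only difference between a vehicle that approaches a stimulus and one that avoids it. The point-source world, the gains, and all numerical details are assumptions made for illustration; this is not the control program of AntiSLAM or of Braitenberg’s own vehicles.

```python
# A hypothetical simulation in the spirit of Braitenberg's Vehicle 2: two
# sensors drive two motors, and the wiring alone determines whether the vehicle
# approaches or avoids a stimulus. The point-source "world" and every numerical
# value are arbitrary; this is not the actual control program of AntiSLAM.
import math

SOURCE = (0.0, 0.0)  # location of the stimulus (a light, say)

def sensor_reading(x, y):
    """Stimulation falls off with squared distance from the source."""
    d2 = (x - SOURCE[0]) ** 2 + (y - SOURCE[1]) ** 2
    return 1.0 / (1.0 + d2)

def simulate(crossed, steps=400):
    x, y, heading = 2.0, 1.5, 0.0              # starting pose
    for _ in range(steps):
        # Two sensors mounted ahead of the body, angled to the left and right.
        left_s = sensor_reading(x + math.cos(heading + 0.4), y + math.sin(heading + 0.4))
        right_s = sensor_reading(x + math.cos(heading - 0.4), y + math.sin(heading - 0.4))
        # Uncrossed wiring: each sensor drives the motor on its own side,
        # turning the vehicle away from the source. Crossed wiring turns it toward.
        left_m, right_m = (right_s, left_s) if crossed else (left_s, right_s)
        heading += 2.0 * (right_m - left_m)    # differential drive turns the body
        speed = 0.2 * (left_m + right_m)       # more stimulation, faster movement
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return math.hypot(x - SOURCE[0], y - SOURCE[1])

print("uncrossed wiring, final distance from source:", round(simulate(False), 2))
print("crossed wiring,   final distance from source:", round(simulate(True), 2))
```

    In this toy setup the uncrossed vehicle should end up driving away from the source while the crossed vehicle ends up circling near it, even though neither outcome is written explicitly anywhere in the program.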

    The appeal of forward engineering, though, lies not only in the discovery of unexpected behavior; it also leads to the discovery of an architecture’s limits. Not only do you explore what a system can do, but you also discover its failures. It has been argued that in the analytic tradition, failures often lead to abandoning a model (Dawson, 2004), because failures amount to an inability to fit a desired set of data. In the synthetic approach, which is not driven by data fitting, failures lead to tinkering with the architecture, usually by adding new capabilities to it (Brooks, 1999, 2002). The synthetic design of cognitive models is a prototypical instance of bricolage (Dawson, Dupuis, & Wilson, 2010; Turkle, 1995).

    For instance, while the early version of AntiSLAM (Dawson, Dupuis, & Wilson, 2010) produced rotational error, it could not process competing geometric and local cues, because it had no means of detecting local cues. After we realized that the robot was capable of reorientation, we solved this issue by adding a light sensor to the existing architecture, so that a corner’s brightness could serve as a rudimentary local feature. The robot is still inadequate, though, because it does not learn. We are currently exploring how this problem might be solved by adding a modifiable connectionist network to map relations between sensors and motors. Note that this approach requires moving beyond a pure embodied account and taking advantage of connectionist concepts.

    In my opinion, it is the limitations inevitably encountered by forward engineers that will provide incentive for a cognitive synthesis. Consider the strong anti-representational positions of radical embodied cognitive scientists (Chemero, 2009; Noë, 2004). It is certainly astonishing to see how much interesting behavior can be generated by systems with limited internal representations. But how much of cognition can be explained in a data-driven, anti-representational manner before researchers have to appeal to representations? For instance, is a radical embodied cognitive science of natural language possible? If embodied cognitive scientists take their theories to their limits, and are then open—as are natural computation researchers—to classical or connectionist concepts, then an interesting and productive cognitive synthesis is inevitable. That some embodied researchers (Clark, 1997) have long been open to a synthesis between embodied and classical ideas is an encouraging sign.

    Similarly, radical connectionist researchers have argued that a great deal of cognition can be accomplished without the need for explicit symbols and explicit rules (Rumelhart & McClelland, 1986a; Smolensky, 1988). Classical researchers have acknowledged the incredible range of phenomena that have yielded to the fairly simple PDP architecture (Fodor & Pylyshyn, 1988). But, again, how much can connectionists explain from a pure PDP perspective, and what phenomena will elude their grasp, demanding that classical ideas be reintroduced? Might it be possible to treat networks as dynamic symbols, and then manipulate them with external rules that are different from the learning rules that are usually applied? Once again, recent ideas seem open to co-operative use of connectionist and classical ideas (Smolensky & Legendre, 2006).

    The synthetic approach provides a route that takes a cognitive scientist to the limits of their theoretical perspective. This in turn will produce a theoretical tension that will likely only be resolved when core elements of alternative perspectives are seriously considered. Note that such a resolution will require a theorist to be open to admitting different kinds of ideas. Rather than trying to show that their architecture can do everything cognitive, researchers need to find what their architectures cannot do, and then expand their theories by including elements of alternative, possibly radically different, views of cognition.

    This is not to say that the synthetic approach is the only methodology to be used. Synthetic methods have their own limitations, and a complete cognitive science requires interplay between synthesis and analysis (Dawson, 2004). In particular, cognitive science ultimately is in the business of explaining the cognition of biological agents. To do so, its models—including those developed via forward engineering—must be validated. Validating a theory requires the traditional practices of the analytic approach, seeking equivalencies between computations, algorithms, and architectures. It is hard to imagine such validation not proceeding by adopting analytic methods that provide relative complexity, error, and intermediate state evidence. It is also hard to imagine that a complete exploration of a putative cognitive architecture will not exploit analytic evidence from the neurosciences.

    Indeed, the inability to use analytic evidence to validate a “pure” model from one school of thought may be the primary motivation to consider alternative perspectives, fueling a true synthesis within cognitive science. According to Kuhn (1970), paradigms are born by discovering anomalies. The analytic techniques of cognitive science are well equipped to discover such problems. What is then required for synthesis is a willingness amongst cognitive scientists to admit that competing views of cognition might be co-operatively applied in order to resolve such anomalies.


