Theories that incorporate stigmergy demonstrate the plausibility of removing central cognitive control; perhaps embodied cognitive science could replace the classical sandwich’s sense-think-act cycle with sense-act reflexes.
The realization was that the so-called central systems of intelligence—or core AI as it has been referred to more recently—was perhaps an unnecessary illusion, and that all the power of intelligence arose from the coupling of perception and actuation systems. (Brooks, 1999, p. viii)
For a stigmergic theory to have any power at all, agents must exhibit two critical abilities. First, they must be able to sense their world. Second, they must be able to physically act upon the world. For instance, stigmergic control of nest construction would be impossible if wasps could neither sense local attributes of nest structure nor act upon the nest to change its appearance.
In embodied cognitive science, an agent’s ability to sense its world is called situatedness. For the time being, we will simply equate situatedness with the ability to sense. However, situatedness is more complicated than this, because it depends critically upon the physical nature of an agent, including its sensory apparatus and its bodily structure. These issues will be considered in more detail in the next section.
In embodied cognitive science, an agent’s ability to act upon and alter its world depends upon its embodiment. In the most general sense, to say that an agent is embodied is to say that it is an artifact, that it has physical existence. Thus while neither a thought experiment (Braitenberg, 1984) nor a computer simulation (Wilhelms & Skinner, 1990) for exploring a Braitenberg vehicle is embodied, a physical robot that acts like a Braitenberg vehicle (Dawson, Dupuis, & Wilson, 2010) is embodied. The physical structure of the robot itself is important in the sense that it is a source of behavioral complexity. Computer simulations of Braitenberg vehicles are idealizations in which all motors and sensors work perfectly. This is impossible in a physically realized robot. In an embodied agent, one motor will be less powerful than another, or one sensor may be less effective than another. Such differences will alter robot behavior. These imperfections are another important source of behavioral complexity, but are absent when such vehicles are created in simulated and idealized worlds.
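The point about imperfect components can be made concrete with a small simulation. The sketch below is a hypothetical, much-simplified Vehicle 2b-style model (the function `simulate_vehicle` and all its parameters are illustrative assumptions of ours, not taken from Braitenberg or from Dawson, Dupuis, & Wilson): two light sensors with crossed excitatory connections steer the agent toward a light source, and a single parameter models an unequal pair of motors.

```python
import math

def simulate_vehicle(steps, motor_gains, light=(5.0, 5.0)):
    """Minimal Braitenberg Vehicle 2b-style sketch (illustrative only).
    Two light sensors cross-wire to two motors; motor_gains models the
    unequal motors of a physically embodied robot."""
    x, y, heading = 0.0, 0.0, 0.0
    dt, base = 0.1, 1.0
    for _ in range(steps):
        readings = []
        for offset in (0.5, -0.5):  # left and right sensors, offset in radians
            sx = x + 0.2 * math.cos(heading + offset)
            sy = y + 0.2 * math.sin(heading + offset)
            d2 = (light[0] - sx) ** 2 + (light[1] - sy) ** 2
            readings.append(1.0 / (1.0 + d2))  # brighter when closer
        left_sensor, right_sensor = readings
        # Crossed wiring: each sensor excites the opposite motor.
        left_motor = motor_gains[0] * (base + right_sensor)
        right_motor = motor_gains[1] * (base + left_sensor)
        heading += (right_motor - left_motor) * dt   # differential steering
        speed = (left_motor + right_motor) / 2.0
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# Identical wiring; only the left motor's strength differs.
ideal = simulate_vehicle(300, (1.0, 1.0))
imperfect = simulate_vehicle(300, (0.9, 1.0))
```

Running the same program with equal motors and with a left motor weakened by ten percent produces two different trajectories: the physical imperfection itself, not the wiring, is the extra source of behavioral complexity.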
However, embodiment is more complicated than mere physical existence. Physically existing agents can be embodied to different degrees (Fong, Nourbakhsh, & Dautenhahn, 2003). This is because some definitions of embodiment relate to the extent to which an agent can alter its environment. For instance, Fong, Nourbakhsh, & Dautenhahn (2003, p. 149) argued that “embodiment is grounded in the relationship between a system and its environment. The more a robot can perturb an environment, and be perturbed by it, the more it is embodied.” As a result, not all robots are equally embodied (Dawson, Dupuis, & Wilson, 2010). A robot that is more strongly embodied than another is a robot that is more capable of affecting, and being affected by, its environment.
The power of embodied cognitive science emerges from agents that are both situated and embodied. This is because these two characteristics provide a critical source of nonlinearity called feedback (Ashby, 1956; Wiener, 1948). Feedback occurs when information about an action’s effect on the world is used to inform the progress of that action. As Ashby (1956, p. 53) noted, “‘feedback’ exists between two parts when each affects the other,” when “circularity of action exists between the parts of a dynamic system.”
Wiener (1948) realized that feedback was central to a core of problems involving communication, control, and statistical mechanics, and that it was crucial to both biological agents and artificial systems. He provided a mathematical framework for studying communication and control, defining the discipline that he called cybernetics. The term cybernetics was derived from the Greek word for “steersman” or “governor.” “In choosing this term, we wish to recognize that the first significant paper on feedback mechanisms is an article on governors, which was published by Clerk Maxwell in 1868” (Wiener, 1948, p. 11). Interestingly, engine governors make frequent appearances in formal discussions of the embodied approach (Clark, 1997; Port & van Gelder, 1995b; Shapiro, 2011).
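The circularity that Ashby and Wiener describe can be exhibited in a few lines of code. The sketch below is a generic negative-feedback controller in the spirit of an engine governor (the function `governor` and all its constants are illustrative assumptions, not Maxwell's analysis): the sensed effect of the throttle's action feeds back to adjust that very action.

```python
def governor(target=100.0, steps=200, gain=0.1):
    """Illustrative negative-feedback speed governor: the measured engine
    speed circles back to set the throttle that produces it."""
    speed, throttle = 0.0, 0.2
    for _ in range(steps):
        speed += 0.3 * (throttle * 200.0 - speed)  # engine responds to throttle
        error = target - speed                     # sensed effect of the action
        throttle += gain * error / 200.0           # action informed by its effect
        throttle = min(max(throttle, 0.0), 1.0)    # physical throttle limits
    return speed
```

Each part affects the other: speed depends on the throttle, and the throttle depends on the observed speed, which is exactly Ashby's "circularity of action" between the parts of a dynamic system.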
The problem with the nonlinearity produced by feedback is that it makes computational analyses extraordinarily difficult. This is because the mathematics of feedback relationships between even small numbers of components is essentially intractable. For instance, Ashby (1956) realized that feedback within a machine consisting of only four simple components could not be analyzed:
When there are only two parts joined so that each affects the other, the properties of the feedback give important and useful information about the properties of the whole. But when the parts rise to even as few as four, if every one affects the other three, then twenty circuits can be traced through them; and knowing the properties of all the twenty circuits does not give complete information about the system. (Ashby, 1956, p. 54)
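Ashby's count can be checked by brute-force enumeration. The sketch below (the helper `count_circuits` is ours, not Ashby's) counts the simple directed cycles among n parts, each of which affects all the others:

```python
from itertools import combinations, permutations

def count_circuits(n):
    """Count the feedback circuits (simple directed cycles) in a system of
    n parts in which every part affects every other part."""
    total = 0
    for k in range(2, n + 1):                  # a circuit involves 2..n parts
        for nodes in combinations(range(n), k):
            rest = nodes[1:]
            # Fixing the smallest part as the starting point counts each
            # circuit exactly once; the rest can follow in any order.
            total += sum(1 for _ in permutations(rest))
    return total
```

Two parts yield the single mutual-feedback loop, four parts yield Ashby's twenty circuits, and five parts already yield eighty-four, which suggests how quickly exhaustive analysis becomes hopeless.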
For this reason, embodied cognitive science is often practised using forward engineering, which is a kind of synthetic methodology (Braitenberg, 1984; Dawson, 2004; Pfeifer & Scheier, 1999). That is, researchers do not take a complete agent and reverse engineer it into its components. Instead, they take a small number of simple components, compose them into an intact system, set the components in motion in an environment of interest, and observe the resulting behaviors.
For instance, Ashby (1960) investigated the complexities of his four-component machine not by wrestling with intractable mathematics, but by building and observing a working device, the Homeostat. It comprised four identical machines (electrical input-output devices) coupled by mutual feedback, and its behavior could be observed as the movement of each machine's indicator. Ashby discovered that the Homeostat could learn: he trained it by physically manipulating the dial of one component, “punishing” an incorrect response (e.g., moving one of its needles in the wrong direction). Ashby also found that the Homeostat could adapt to two different environments that were alternated from trial to trial. Such knowledge was unattainable from mathematical analyses: “A better demonstration can be given by a machine, built so that we know its nature exactly and on which we can observe what will happen in various conditions” (p. 99).
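Ashby's synthetic strategy is easy to caricature in software. The toy below is an assumption-laden sketch of ours, not a model of Ashby's actual circuitry: four units exert mutual feedback, and a crude “uniselector” randomly rewires a unit's inputs whenever its needle leaves its essential limits, so the search continues until the coupled system happens to settle.

```python
import random

def homeostat(steps=2000, seed=1, limit=1.0, dt=0.2):
    """Toy Homeostat-style sketch (illustrative only). Four units influence
    one another through weighted connections; an out-of-bounds needle
    triggers a random rewiring of that unit's inputs."""
    random.seed(seed)
    n = 4
    w = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
    x = [random.uniform(-0.5, 0.5) for _ in range(n)]
    rewirings = 0
    for _ in range(steps):
        # Synchronous update: every unit feeds back on every other.
        x = [x[i] + dt * sum(w[i][j] * x[j] for j in range(n))
             for i in range(n)]
        for i in range(n):
            if abs(x[i]) > limit:      # essential variable out of bounds
                w[i] = [random.uniform(-1.0, 1.0) for _ in range(n)]
                rewirings += 1
                x[i] = 0.0             # needle re-centred after rewiring
    return x, rewirings
```

As with the physical Homeostat, the interesting facts here, such as how many rewirings a given starting configuration needs before the feedback stabilizes, are found by running the device and watching, not by solving its equations.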
Braitenberg (1984) has argued that an advantage of forward engineering is that it produces theories simpler than those attained by reverse engineering. This is because when complex or surprising behaviors emerge, preexisting knowledge of the components—which were constructed by the researcher—can be used to generate simpler explanations of the behavior.
Analysis is more difficult than invention in the sense in which, generally, induction takes more time to perform than deduction: in induction one has to search for the way, whereas in deduction one follows a straightforward path. (Braitenberg, 1984, p. 20)
Braitenberg called this the law of uphill analysis and downhill synthesis.
Another way in which to consider the law of uphill analysis and downhill synthesis is to apply Simon’s (1969) parable of the ant. If the environment is taken seriously as a contributor to the complexity of the behavior of a situated and embodied agent, then one can take advantage of the agent’s world and propose less complex internal mechanisms that still produce the desired intricate results. This idea is central to the replacement hypothesis that Shapiro (2011) has argued is a fundamental characteristic of embodied cognitive science.