
5.15: The Roots of Forward Engineering


    The methodology most typical of classical cognitive science is reverse engineering: observing the behavior of an intact system in order to infer the nature and organization of its internal processes. Most cognitive theories are produced with a methodology called functional analysis (Cummins, 1975, 1983), which uses experimental results to iteratively carve a system into a hierarchy of functional components until a basic level of subfunctions, the cognitive architecture, is reached.

    A practical problem with functional analysis or reverse engineering is the frame of reference problem (Pfeifer & Scheier, 1999). This problem concerns how responsibility for the complexity of behavior is distributed between the internal processes of an agent and the external influences of its environment. Classical cognitive science, a major practitioner of functional analysis, endorses the classical sandwich; its functional analyses tend to attribute behavioral complexity to the internal processes of an agent while ignoring potential contributions of the environment. In other words, the frame of reference problem amounts to ignoring Simon’s (1969) parable of the ant.

    Embodied cognitive scientists frequently adopt a different methodology, forward engineering. In forward engineering, a system is constructed from a set of primitive functions of interest. The system is then observed to determine whether it generates surprising or complicated behavior. “Only about 1 in 20 ‘gets it’—that is, the idea of thinking about psychological problems by inventing mechanisms for them and then trying to see what they can and cannot do” (Minsky, personal communication, 1995). This approach has also been called synthetic psychology (Braitenberg, 1984). Reverse engineers collect data to create their models; in contrast, forward engineers build their models first and use them as primary sources of data (Dawson, 2004).

    We noted in Chapter 3 that classical cognitive science has descended from the seventeenth-century rationalist philosophy of René Descartes (1960, 1996). It was observed in Chapter 4 that connectionist cognitive science descended from the empiricism of John Locke (1977), which was itself a reaction against Cartesian rationalism. The synthetic approach seeks “understanding by building” (Pfeifer & Scheier, 1999), and as such permits us to link embodied cognitive science to another reaction against Descartes, the eighteenth-century philosophy of Giambattista Vico (Vico, 1988, 1990, 2002).

    Vico based his philosophy on the analysis of word meanings. He argued that the Latin term for truth, verum, had the same meaning as the Latin term factum, and therefore concluded that “it is reasonable to assume that the ancient sages of Italy entertained the following beliefs about the true: ‘the true is precisely what is made’” (Vico, 1988, p. 46). This conclusion led Vico to his argument that humans could only understand the things that they made, which is why he studied societal artifacts, such as the law.

    Vico’s work provides an early motivation for forward engineering: “To know (scire) is to put together the elements of things” (Vico, 1988, p. 46). Vico’s account of the mind was a radical departure from Cartesian disembodiment. To Vico, the Latins “thought every work of the mind was sense; that is, whatever the mind does or undergoes derives from contact with bodies” (p. 95). Indeed, Vico’s verum-factum principle is based upon embodied mentality. Because the mind is “immersed and buried in the body, it naturally inclines to take notice of bodily things” (p. 97).

    While the philosophical roots of forward engineering can be traced to Vico’s eighteenth-century philosophy, its actual practice—as far as cognitive science is concerned—did not emerge until cybernetics arose in the 1940s. One of the earliest examples of synthetic psychology was the Homeostat (Ashby, 1956, 1960), built by cyberneticist William Ross Ashby in 1948. The Homeostat was a device that changed its internal states to maintain stability in the interactions among its internal components and with its environment. William Grey Walter (1963, p. 123) noted that it was “like a fireside cat or dog which only stirs when disturbed, and then methodically finds a comfortable position and goes to sleep again.”

    Ashby’s (1956, 1960) Homeostat illustrated the promise of synthetic psychology. The feedback that Ashby was interested in could not be analyzed mathematically; it was successfully studied synthetically with Ashby’s device. Remember, too, that when the Homeostat was created, computer simulations of feedback were still in the future.

    As well, it was easier to produce interesting behavior in the Homeostat than it was to analyze it. This is because the secret to its success was a large number of potential internal states, which provided many degrees of freedom for producing stability. At the same time, this internal variability was an obstacle to traditional analysis. “Although the machine is man-made, the experimenter cannot tell at any moment exactly what the machine’s circuit is without ‘killing’ it and dissecting out the ‘nervous system’” (Grey Walter, 1963, p. 124).
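    The Homeostat’s strategy can be conveyed with a minimal sketch. The sketch below is an illustrative simplification of Ashby’s ultrastability, not a model of his actual circuitry: the particular update rule, unit count, limits, and random-weight ranges are all assumptions. The key idea it preserves is that whenever an “essential variable” drifts out of bounds, the device abandons its current configuration for a randomly chosen new one, and keeps doing so until it happens upon a stable arrangement.

    ```python
    import random

    def homeostat_step(state, weights):
        """One feedback step: each unit's next value is a weighted sum of
        all units (a toy stand-in for Ashby's coupled electrical circuits)."""
        return [sum(w * s for w, s in zip(row, state)) for row in weights]

    def run_homeostat(n_units=4, limit=1.0, n_steps=1000, seed=0):
        """Iterate the feedback loop; whenever any unit exceeds its limit,
        jump to a new random configuration (Ashby's 'uniselector' move).
        Returns the number of random reconfigurations that were needed."""
        rng = random.Random(seed)

        def random_weights():
            return [[rng.uniform(-0.5, 0.5) for _ in range(n_units)]
                    for _ in range(n_units)]

        def random_state():
            return [rng.uniform(-0.1, 0.1) for _ in range(n_units)]

        weights, state = random_weights(), random_state()
        resets = 0
        for _ in range(n_steps):
            state = homeostat_step(state, weights)
            if any(abs(s) > limit for s in state):
                # An essential variable left its permitted range: rewire
                # at random and try again, as the Homeostat did.
                weights, state = random_weights(), random_state()
                resets += 1
        return resets
    ```

    Note how this sketch also illustrates Grey Walter’s complaint: the configuration that finally produces stability is whichever random draw happened to work, so inspecting the finished, behaving system tells the experimenter little about why it is stable.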

    Concerns about this characteristic of the Homeostat inspired the study of the first autonomous robots, created by cyberneticist William Grey Walter (1950a, 1950b, 1951, 1963). The first two of these machines were constructed in 1948 (de Latil, 1956); built from surplus war materials, they were clearly products of bricolage. “The first model of this species was furnished with pinions from old clocks and gas meters” (Grey Walter, 1963, p. 244). By 1951, these two had been replaced by six improved machines (Holland, 2003a), two of which are currently displayed in museums.

    The robots came to be called Tortoises because of their appearance: they seemed to be toy tractors surrounded by a tortoise-like shell. Grey Walter viewed them as an artificial life form that he classified as Machina speculatrix. Machina speculatrix was a reaction against the internal variability in Ashby’s Homeostat. The goal of Grey Walter’s robotics research was to explore the degree to which one could produce complex behavior from such very simple devices (Boden, 2006). When Grey Walter modeled behavior he “was determined to wield Occam’s razor. That is, he aimed to posit as simple a mechanism as possible to explain apparently complex behavior. And simple, here, meant simple” (Boden, 2006, p. 224). Grey Walter restricted a Tortoise’s internal components to “two functional elements: two miniature radio tubes, two sense organs, one for light and the other for touch, and two effectors or motors, one for crawling and the other for steering” (Grey Walter, 1950b, p. 43).

    The interesting behavior of the Tortoises was a product of simple reflexes that used detected light (via a light sensor mounted on the robot’s steering column) and obstacles (via movement of the robot’s shell) to control the actions of the robot’s two motors. Light controlled motor activity as follows. In dim light, the Tortoise’s drive motor would move the robot forward, while the steering motor slowly turned the front wheel. Thus in dim light the Tortoise “explored.” In moderate light, the drive motor continued to run, but the steering motor stopped. Thus in moderate light the Tortoise “approached.” In bright light, the drive motor continued to run, but the steering motor ran at twice the normal speed, causing marked oscillatory movements. Thus in bright light the Tortoise “avoided.”

    The motors were affected by the shell’s sense of touch as follows. When the Tortoise’s shell was moved by an obstacle, an oscillating signal was generated that first caused the robot to drive fast while slowly turning, and then to drive slowly while quickly turning. The alternation of these behaviors permitted the Tortoise to escape from obstacles. Interestingly, when movement of the Tortoise shell triggered such behavior, signals from the photoelectric cell were rendered inoperative for a few moments. Thus Grey Walter employed a simple version of what later would be known as Brooks’ (1999) subsumption architecture: a higher layer of touch processing could inhibit a lower layer of light processing.

    In accordance with forward engineering, after Grey Walter constructed his robots, he observed their behavior by recording the paths that they took in a number of simple environments. He preserved a visual record of their movement by using time-lapse photography; because of lights mounted on the robots, their paths were literally traced on each photograph (Holland, 2003b). Like the paths on the beach traced in Simon’s (1969) parable of the ant, the photographs recorded Tortoise behavior that was “remarkably unpredictable” (Grey Walter, 1950b, p. 44).

    Grey Walter observed the behaviors of his robots in a number of different environments. For example, in one study the robot was placed in a room where a light was hidden from view by an obstacle. The Tortoise began to explore the room, bumped into the obstacle, and engaged in its avoidance behavior. This in turn permitted the robot to detect the light, which it approached. However, it did not collide with the light. Instead, the robot circled it cautiously, veering away whenever it came too close. “Thus the machine can avoid the fate of the moth in the candle” (Grey Walter, 1963, p. 128).

    When the environment became more complicated, so too did the behaviors produced by the Tortoise. If the robot was confronted with two stimulus lights instead of one, it would first be attracted to one, which it circled, only to move away and circle the other, demonstrating an ability to choose: it solved the problem “of Buridan’s ass, which starved to death, as some animals acting trophically in fact do, because two exactly equal piles of hay were precisely the same distance away” (Grey Walter, 1963, p. 128).

    If a mirror was placed in the Tortoise’s environment, it served as an obstacle, but it also reflected the light mounted on the robot, which acted as an attractant. The resulting dynamics produced the so-called “mirror dance,” in which the robot,

    lingers before a mirror, flickering, twittering and jigging like a clumsy Narcissus. The behavior of a creature thus engaged with its own reflection is quite specific, and on a purely empirical basis, if it were observed in an animal, might be accepted as evidence of some degree of self-awareness. (Grey Walter, 1963, pp. 128–129)

    In less controlled or open-ended environments, the behavior that was produced was lifelike in its complexity. The Tortoises produced “the exploratory, speculative behavior that is so characteristic of most animals” (Grey Walter, 1950b, p. 43). Examples of such behavior were recounted by cyberneticist Pierre de Latil (1956):

    Elsie moved to and fro just like a real animal. A kind of head at the end of a long neck towered over the shell, like a lighthouse on a promontory and, like a lighthouse, it veered round and round continuously. (de Latil, 1956, p. 209)

    The Daily Mail reported that,

    the toys possess the senses of sight, hunger, touch, and memory. They can walk about the room avoiding obstacles, stroll round the garden, climb stairs, and feed themselves by automatically recharging six-volt accumulators from the light in the room. And they can dance a jig, go to sleep when tired, and give an electric shock if disturbed when they are not playful. (Holland, 2003a, p. 2090)

    Grey Walter released the Tortoises to mingle with the audience at a 1955 meeting of the British Association (Hayward, 2001): “The tortoises, with their in-built attraction towards light, moved towards the pale stockings of the female delegates whilst avoiding the darker legs of the betrousered males” (p. 624).

    Grey Walter was masterfully able to promote his work to the general public (Hayward, 2001; Holland, 2003a). However, he worried that public reception of his machines would decrease their scientific importance. History has put such concerns to rest; Grey Walter’s pioneering research has influenced many modern researchers (Reeve & Webb, 2003). Grey Walter’s,

    ingenious devices were seriously intended as working models for understanding biology: a ‘mirror for the brain’ that could both generally enrich our understanding of principles of behavior (such as the complex outcome of combining simple tropisms) and be used to test specific hypotheses (such as Hebbian learning). (Reeve & Webb, 2003, p. 2245)


    This page titled 5.15: The Roots of Forward Engineering is shared under a not declared license and was authored, remixed, and/or curated by Michael R. W. Dawson (Athabasca University Press).
