Animism is the assignment of lifelike properties to inanimate, but moving, objects. Animism characterizes the thinking of young children, who may believe that a car, for instance, is alive because it can move on its own (Piaget, 1929). Animism was also apparent in the occult tradition of the Renaissance; the influential memory systems of Lull and of Bruno imbued moving images with powerful, magical properties (Yates, 1966).
Animism was important to the development of scientific and mathematical methods in the seventeenth century: “The Renaissance conception of an animistic universe, operated by magic, prepared the way for a conception of a mechanical universe, operated by mathematics” (Yates, 1966, p. 224). Note the animism in the introduction to Hobbes’ (1967) Leviathan:
For seeing life is but a motion of limbs, the beginning whereof is in some principal part within; why may we not say, that all Automata (Engines that move themselves by means of springs and wheeles as doth a watch) have an artificial life? For what is the Heart, but a Spring; and the Nerves, but so many Springs; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer? (Hobbes, 1967, p. 3)
Such appeals to animism raised new problems. How were moving humans to be distinguished from machines and animals? Cartesian philosophy grounded humanity in mechanistic principles, but went on to distinguish humans-as-machines from animals because only the former possessed a soul, whose essence was “only to think” (Descartes, 1960, p. 41).
Seventeenth-century philosophy was the source of the mechanical view of man (Grenville, 2001; Wood, 2002). It was also the home of a reverse inquiry: was it possible for human artifacts, such as clockwork mechanisms, to become alive or intelligent?
By the eighteenth century, such philosophical ponderings were fuelled by “living machines” that had made their appearance to great public acclaim. Between 1768 and 1774, Pierre and Henri-Louis Jaquet-Droz constructed elaborate clockwork androids that wrote, sketched, or played the harpsichord (Wood, 2002). The eighteenth-century automata of Jacques de Vaucanson, on display for a full century, included a flute player and a food-digesting duck. Von Kempelen’s infamous chess-playing Turk first appeared in 1770; it was in and out of the public eye until its destruction by fire in 1854 (Standage, 2002).
Wood (2002, p. xxvii) notes that all automata are presumptions “that life can be simulated by art or science or magic. And embodied in each invention is a riddle, a fundamental challenge to our perception of what makes us human.” In the eighteenth century, this challenge attracted the attention of the Catholic Church. In 1727, Vaucanson’s workshop was ordered destroyed because his clockwork servants, who served dinner and cleared tables, were deemed profane (Wood, 2002). The Spanish Inquisition imprisoned both Pierre Jaquet-Droz and his writing automaton!
In spite of the Church’s efforts, eighteenth-century automata were popular, tapping into a nascent fascination with the possibility of living machines. This fascination has persisted uninterrupted to the present day, as evidenced by the many depictions of robots and cyborgs in popular fiction and films (Asimov, 2004; Caudill, 1992; Grenville, 2001; Ichbiah, 2005; Levin, 2002; Menzel, D’Aluisio, & Mann, 2000).
Not all modern automata were developed as vehicles of entertainment. The late 1940s saw the appearance of the first autonomous robots, which resembled, and were called, Tortoises (Grey Walter, 1963). These devices provided “mimicry of life” (p. 114) and were used to investigate the possibility that living organisms were simple devices that were governed by basic cybernetic principles. Nonetheless, Grey Walter worried that animism might discredit the scientific merit of his work:
We are daily reminded how readily living and even divine properties are projected into inanimate things by hopeful but bewildered men and women; and the scientist cannot escape the suspicion that his projections may be psychologically the substitutes and manifestations of his own hope and bewilderment. (Grey Walter, 1963, p. 115)
While Grey Walter’s Tortoises were important scientific contributions (Bladin, 2006; Hayward, 2001; Holland, 2003b; Sharkey & Sharkey, 2009), the twentieth century saw the creation of another, far more important, automaton: the digital computer. The computer is rooted in seventeenth-century advances in logic and mathematics. Inspired by the Cartesian notion of rational, logical, mathematical thought, the computer brought logicism to life.
Logicism is the idea that thinking is identical to performing logical operations (Boole, 2003). By the end of the nineteenth century, numerous improvements to Boole’s logic led to the invention of machines that automated logical operations; most of these devices were mechanical, but electrical logic machines had also been conceived (Buck & Hunka, 1999; Jevons, 1870; Marquand, 1885; Mays, 1953). If thinking were logic, then thinking machines—machines that could do logic—existed in the late nineteenth century.
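The principle behind such devices can be sketched in a few lines of modern code. The example below is my own illustration, not a reconstruction of any particular nineteenth-century machine: it tests whether a conclusion follows from a set of premises by exhaustively enumerating truth assignments, the same brute-force strategy that mechanical logic machines such as Jevons’s embodied.

```python
from itertools import product

def follows(premises, conclusion, variables):
    """Return True if every truth assignment that satisfies all
    premises also satisfies the conclusion (i.e., the inference
    is logically valid)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample assignment
    return True

# Modus ponens: from (p implies q) and p, infer q.
premises = [lambda e: (not e["p"]) or e["q"],  # p implies q
            lambda e: e["p"]]                  # p
conclusion = lambda e: e["q"]                  # therefore q

print(follows(premises, conclusion, ["p", "q"]))  # True: valid
```

On this miniature scale, "doing logic" is nothing more than mechanically checking a finite table of cases, which is precisely why the operation could be delegated to gears or relays.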
The logic machines of the nineteenth century were, in fact, quite limited in ability, as we see later in this chapter. However, they were soon replaced by much more powerful devices. In the first half of the twentieth century, the basic theory of a general computing mechanism was laid out in Alan Turing’s account of his universal machine (Hodges, 1983; Turing, 1936). The universal machine was a device that “could simulate the work done by any machine. . . . It would be a machine to do everything, which was enough to give anyone pause for thought” (Hodges, 1983, p. 104). The theory was converted into working universal machines—electronic computers—by the middle of the twentieth century (Goldstine, 1993; Reid, 2001; Williams, 1997).
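The core of Turing’s idea—that a machine is fully specified by a finite table of state transitions—can be suggested with a toy simulator. The sketch below is an illustrative example only; the machine and its transition table are invented for this purpose rather than taken from Turing’s 1936 paper. It runs a trivial Turing machine that inverts a binary string.

```python
def run_turing_machine(table, tape, state="start", blank="_"):
    """Simulate a one-tape Turing machine.
    `table` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A machine that flips each bit (0 -> 1, 1 -> 0), then halts at the blank.
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run_turing_machine(invert, "1011"))  # prints "0100"
```

Because the table itself is just data, one machine can read another machine’s table as input and imitate it—this is the step that makes a machine universal, “a machine to do everything.”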
The invention of the electronic computer made logicism practical. The computer’s general ability to manipulate symbols made the attainment of machine intelligence seem plausible to many, and inevitable to some (Turing, 1950). Logicism was validated every time a computer accomplished some new task that had been presumed to be the exclusive domain of human intelligence (Kurzweil, 1990, 1999). The pioneers of cognitive science made some bold claims and some aggressive predictions (McCorduck, 1979): in 1956, Herbert Simon announced to a mathematical modelling class that “Over Christmas Allen Newell and I invented a thinking machine” (McCorduck, 1979, p. 116). It was predicted that by the late 1960s most theories in psychology would be expressed as computer programs (Simon & Newell, 1958).
The means by which computers accomplished complex information processing tasks inspired theories about the nature of human thought. The basic workings of computers became, at the very least, a metaphor for the architecture of human cognition. This metaphor was already evident in philosophy by the early 1940s (Craik, 1943):
My hypothesis then is that thought models, or parallels, reality—that its essential feature is not ‘the mind,’ ‘the self,’ ‘sense data’ nor ‘propositions,’ but is symbolism, and that this symbolism is largely of the same kind which is familiar to us in mechanical devices which aid thought and calculation. (Craik, 1943, p. 57)
Importantly, many modern cognitive scientists do not see the relationship between cognition and computers as being merely metaphorical (Pylyshyn, 1979a, p. 435): “For me, the notion of computation stands in the same relation to cognition as geometry does to mechanics: It is not a metaphor but part of a literal description of cognitive activity.”
Computers are special devices in another sense: in order to explain how they work, one must look at them from several different perspectives. Each perspective requires a radically different vocabulary to describe what computers do. When cognitive science assumes that cognition is computation, it also assumes that human cognition must be explained using multiple vocabularies.
In this chapter, I provide an historical view of logicism and computing to introduce these multiple vocabularies, describe their differences, and explain why all are needed. We begin with the logicism of George Boole, which, when transformed into modern binary logic, defined the fundamental operations of modern digital computers.