In the preceding sections some of the basic characteristics of connectionist networks were presented. These elements of connectionist cognitive science have emerged as a reaction against key assumptions of classical cognitive science. Connectionist cognitive scientists replace rationalism with empiricism, and recursion with chains of associations.
Although connectionism reacts against many of the elements of classical cognitive science, there are many similarities between the two. In particular, the multiple levels of analysis described in Chapter 2 apply to connectionist cognitive science just as well as they do to classical cognitive science (Dawson, 1998). The next two sections of this chapter focus on connectionist research in terms of one of these, the computational level of investigation.
Connectionism’s emphasis on both empiricism and associationism has raised the spectre, at least in the eyes of many classical cognitive scientists, of a return to the behaviourism that cognitivism itself revolted against. When cognitivism arose, some of its early successes involved formal proofs that behaviourist and associationist theories were incapable of accounting for fundamental properties of human languages (Bever, Fodor, & Garrett, 1968; Chomsky, 1957, 1959b, 1965, 1966). With the rise of modern connectionism, similar computational arguments have been made against artificial neural networks, essentially claiming that they are not sophisticated enough to belong to the class of universal machines (Fodor & Pylyshyn, 1988).
In Section 4.6, “Beyond the Terminal Meta-postulate,” we consider the in-principle power of connectionist networks, beginning with two different types of tasks that networks can be used to accomplish. One is pattern classification: assigning an input pattern in an all-or-none fashion to a particular category. A second is function approximation: generating a continuous response to a set of input values.
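The distinction between these two tasks can be illustrated with a minimal sketch of a single processing unit. The weights, bias, and input values below are assumed for illustration only; the point is that the same net input can drive either an all-or-none response (pattern classification) or a graded, continuous response (function approximation), depending on the activation function chosen.

```python
import numpy as np

def step(net):
    """All-or-none activation: assign the input to a category (1) or not (0)."""
    return 1.0 if net > 0 else 0.0

def sigmoid(net):
    """Continuous activation: generate a graded response to the input values."""
    return 1.0 / (1.0 + np.exp(-net))

# Hypothetical connection weights and bias for a single output unit.
w = np.array([1.0, 1.0])
b = -1.5

x = np.array([1.0, 1.0])        # an example input pattern
net = float(w @ x + b)          # net input to the unit

classification = step(net)      # pattern classification: all-or-none
approximation = sigmoid(net)    # function approximation: continuous value
```

In this sketch the net input is 0.5, so the step unit classifies the pattern in an all-or-none fashion, while the sigmoid unit produces a continuous value between 0 and 1.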
Section 4.6 then proceeds to computational analyses of how capable networks are of accomplishing these tasks. These analyses prove that networks are as powerful as need be, provided that they include hidden units. They can serve as arbitrary pattern classifiers, meaning that they can solve any pattern classification problem with which they are faced. They can also serve as universal function approximators, meaning that they can fit any continuous function to an arbitrary degree of precision. This computational power suggests that artificial neural networks belong to the class of universal machines. The section ends with a brief review of computational analyses, which conclude that connectionist networks indeed can serve as universal Turing machines and are therefore computationally sophisticated enough to serve as plausible models for cognitive science.
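The role of hidden units in this computational power can be made concrete with a classic example. The exclusive-or (XOR) problem cannot be solved by any network that maps inputs directly to an output unit, but a network with two hidden units solves it easily. The weights below are hand-set for illustration, not learned:

```python
import numpy as np

def step(net):
    """All-or-none activation for each unit."""
    return (net > 0).astype(float)

# Hand-set weights for a network with two hidden units that computes XOR,
# a pattern classification no network without hidden units can perform.
W_hidden = np.array([[1.0, 1.0],     # hidden unit 1 detects x1 OR x2
                     [1.0, 1.0]])    # hidden unit 2 detects x1 AND x2
b_hidden = np.array([-0.5, -1.5])
w_out = np.array([1.0, -1.0])        # output combines OR with NOT-AND
b_out = -0.5

def xor_net(x):
    h = step(W_hidden @ x + b_hidden)       # hidden layer activities
    return float(step(np.array([w_out @ h + b_out]))[0])

outputs = [xor_net(np.array(x, dtype=float))
           for x in ([0, 0], [0, 1], [1, 0], [1, 1])]
```

The hidden units re-represent the input patterns so that a single output unit can carve the categories apart, which is the intuition behind the formal results on arbitrary pattern classification.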
Computational analyses need not limit themselves to considering the general power of artificial neural networks. Computational analyses can be used to explore more specific questions about networks. This is illustrated in Section 4.7, “What Do Output Unit Activities Represent?” in which we use formal methods to answer the question that serves as the section’s title. The section begins with a general discussion of theories that view biological agents as intuitive statisticians who infer the probability that certain events may occur in the world (Peterson & Beach, 1967; Rescorla, 1967, 1968). An empirical result is reviewed that suggests artificial neural networks are also intuitive statisticians, in the sense that the activity of an output unit matches the probability that a network will be “rewarded” (i.e., trained to turn on) when presented with a particular set of cues (Dawson et al., 2009).
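The intuitive-statistician result can be sketched in a few lines. In the simulation below (the learning rate and trial count are assumed values, not taken from the cited studies), a single associative weight is trained with the Rescorla-Wagner delta rule while "reward" is delivered on 70 percent of trials; the weight, and hence the unit's response to the cue, comes to hover near the reward probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# One cue; the network is "rewarded" (target = 1) on 70% of trials.
p_reward = 0.7
w = 0.0       # associative strength of the cue
lr = 0.01     # assumed learning rate

for _ in range(5000):
    target = 1.0 if rng.random() < p_reward else 0.0
    # Rescorla-Wagner / delta rule: adjust the weight in proportion
    # to the error between the obtained and predicted outcome.
    w += lr * (target - w)
```

After training, the unit's activity to the cue approximates the conditional probability of reward given that cue, which is the empirical pattern the formal proof in Section 4.7 explains.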
The section then ends by providing an example computational analysis: a formal proof that output unit activity can indeed literally be interpreted as a conditional probability. This proof takes advantage of known formal relations between neural networks and the Rescorla-Wagner learning rule (Dawson, 2008; Gluck & Bower, 1988; Sutton & Barto, 1981), as well as known formal relations between the Rescorla-Wagner learning rule and contingency theory (Chapman & Robbins, 1990).