In the last several sections we have explored connectionist cognitive science at the computational level of analysis. Claims about linear separability, the in-principle power of multilayer networks, and the interpretation of output unit activity have all been established using formal analyses.
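The claim about linear separability can be made concrete with a small sketch (not drawn from the text; the function name and parameters are illustrative assumptions): a single threshold unit trained with the delta rule learns AND, which is linearly separable, but no setting of its weights can classify all four XOR patterns correctly.

```python
import numpy as np

def train_perceptron(patterns, targets, epochs=100, lr=0.1, seed=0):
    """Train a single threshold unit with the delta rule; return its final predictions."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0, 0.1, size=patterns.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(patterns, targets):
            y = 1.0 if x @ w + b > 0 else 0.0   # threshold activation
            w += lr * (t - y) * x               # delta rule weight update
            b += lr * (t - y)
    return np.array([1.0 if x @ w + b > 0 else 0.0 for x in patterns])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
AND = np.array([0, 0, 0, 1], dtype=float)  # linearly separable
XOR = np.array([0, 1, 1, 0], dtype=float)  # not linearly separable

and_ok = (train_perceptron(X, AND) == AND).all()
xor_ok = (train_perceptron(X, XOR) == XOR).all()
```

Here `and_ok` comes out true and `xor_ok` false: the XOR failure is not a matter of too little training, but of the geometry of the problem, which is what motivates multilayer networks.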
In the next few sections we consider connectionist cognitive science from another perspective that it shares with classical cognitive science: the use of algorithmic-level investigations. The sections that follow explore how modern networks, which develop internal representations with hidden units, are trained, and describe how one might interpret the internal representations of a network after it has learned to accomplish a task of interest. Such interpretations answer the question "How does a network convert an input pattern into an output response?" and thus provide information about network algorithms.
The need for algorithmic-level investigations is introduced in Section 4.9, which notes that most modern connectionist networks are multilayered, meaning that they have at least one layer of hidden units lying between the input units and the output units. That section introduces a general technique for training such networks, called the generalized delta rule. This rule extends empiricism to systems that can have powerful internal representations.
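As a rough sketch of the kind of rule Section 4.9 derives (this is one common formulation, assuming logistic activation functions and a sum-of-squares error measure; the variable names and settings are mine, not the book's), training proceeds by propagating an error signal backward from the output units through the hidden layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, T, n_hidden=3, epochs=20000, lr=0.5, seed=1):
    """Gradient-descent training of a one-hidden-layer network
    (logistic units, squared error); returns the error at each epoch."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1.0, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1.0, (n_hidden, T.shape[1])); b2 = np.zeros(T.shape[1])
    errors = []
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                  # hidden unit activations
        Y = sigmoid(H @ W2 + b2)                  # output unit activations
        errors.append(np.mean((Y - T) ** 2))
        d_out = (Y - T) * Y * (1 - Y)             # error signal at the outputs
        d_hid = (d_out @ W2.T) * H * (1 - H)      # error propagated to hidden units
        W2 -= lr * H.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)
    return errors

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # XOR, beyond a single-layer network
errs = train_backprop(X, T)
```

The point of the sketch is the two error terms: the output term depends only on local quantities, while the hidden term borrows the output errors through the very weights that carried activation forward, which is what lets empiricist learning reach internal representations.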
Section 4.10 provides one example of how the internal representations created by the generalized delta rule can be interpreted. It describes the analysis of a multilayered network that has learned to classify different types of musical chords. An examination of the connection weights between the input units and the hidden units reveals a number of interesting ways in which this network represents musical regularities. An examination of the network’s hidden unit space shows how these regularities permit the network to rearrange the different chord types so that they can then be carved into appropriate decision regions by the output units.
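The idea of a hidden unit space that rearranges patterns for the output units can be sketched with a toy case (the hand-set weights below are an illustrative assumption, not the chord network analyzed here). In the input space the XOR patterns are not linearly separable, but the hidden layer remaps each input pattern to a point in hidden unit space, where a single output unit can carve the classes apart with one cut:

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Hand-set weights (assumed for illustration): hidden unit 0 computes OR,
# hidden unit 1 computes AND of the two inputs.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])
H = step(X @ W1 + b1)   # each input pattern becomes a point in hidden unit space

# In hidden unit space one linear cut suffices: respond when OR is on but AND is off.
w2 = np.array([1.0, -2.0]); b2 = -0.5
Y = step(H @ w2 + b2)
```

Note that the two middle patterns, [0, 1] and [1, 0], land on the same point [1, 0] in hidden unit space: the hidden layer has rearranged the problem so that the output unit's single decision region solves it.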
Section 4.11 introduces a biologically inspired approach to discovering network algorithms. This approach involves wiretapping the responses of hidden units when the network is presented with various stimuli, and then using these responses to determine the trigger features that the hidden units detect. It is also shown that changing the activation function of a hidden unit can lead to interesting complexities in defining the notion of a trigger feature, because some kinds of hidden units capture families of trigger features that require further analysis.
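A minimal sketch of the wiretapping idea (the unit's weights below are hypothetical, chosen only for illustration): present every stimulus in a set, record the hidden unit's response to each, and take the most strongly activating stimulus as an estimate of its trigger feature.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A hypothetical hidden unit's incoming weights and bias (assumed for illustration).
w = np.array([2.0, -1.0, 0.5])
b = -0.5

# "Wiretap" the unit: present every binary stimulus and record its response.
stimuli = [np.array([i, j, k], dtype=float)
           for i in (0, 1) for j in (0, 1) for k in (0, 1)]
responses = [sigmoid(s @ w + b) for s in stimuli]

# The trigger feature is estimated as the maximally activating stimulus.
trigger = stimuli[int(np.argmax(responses))]
```

For a monotonic activation function such as the logistic, net input orders the responses, so a unique maximally activating stimulus exists. A nonmonotonic unit (for instance, a Gaussian that peaks when net input is near zero) can respond maximally to many different stimuli at once, which is why such units capture families of trigger features requiring further analysis.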
In Section 4.12 we describe how interpreting the internal structure of a network begins to shed light on the relationship between algorithms and architectures. Also described is a network that, as a result of training, translates a classical model of a task into a connectionist one. This illustrates an intertheoretic reduction between classical and connectionist theories, raising the possibility that both types of theories can be described in the same architecture.