# 2.8: Formal Accounts of Input-Output Mappings

For a cyberneticist, a machine was simply a device for converting some input into some output—and nothing more (Ashby, 1956, 1960; Wiener, 1948, 1964). A cyberneticist would be concerned primarily with describing a machine such as a calculating device in terms of its input-output mapping. However, underlying this simple definition was a great deal of complexity.

First, cybernetics was interested not in the relation between a particular input and a particular output, but in a general account of a machine’s possible behaviour “by asking not ‘what individual act will it produce here and now?’ but ‘what are all the possible behaviours that it can produce?’” (Ashby, 1956, p. 3).
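Ashby’s point can be made concrete with a small sketch. The two-state machine below is hypothetical, chosen only for illustration; the idea is that we characterize it not by any single act, but by exhaustively enumerating every input-output pair it can produce.

```python
# A minimal sketch of Ashby's idea: describe a machine by the full set
# of behaviours it can produce, not by one individual act.
# The machine here is a hypothetical two-state transducer.

from itertools import product

# Transition table: (state, input) -> (next_state, output)
MACHINE = {
    ("A", 0): ("A", 0),
    ("A", 1): ("B", 1),
    ("B", 0): ("A", 1),
    ("B", 1): ("B", 0),
}

def run(state, inputs):
    """Feed an input sequence to the machine; return its output sequence."""
    outputs = []
    for x in inputs:
        state, y = MACHINE[(state, x)]
        outputs.append(y)
    return tuple(outputs)

def all_behaviours(length):
    """Every input sequence of a given length, paired with its output."""
    return {inputs: run("A", inputs)
            for inputs in product((0, 1), repeat=length)}

behaviours = all_behaviours(2)
# Four possible input sequences of length 2, hence four input-output pairs:
# this dictionary *is* the machine's behaviour, in the cybernetic sense.
```

Answering “what are all the possible behaviours?” amounts to computing (or proving properties of) this complete mapping, rather than observing any one run.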

Second, cybernetics aimed not only to specify which input-output mappings a device could generate, but also which behaviours it could not generate, and why: “Cybernetics envisages a set of possibilities much wider than the actual, and then asks why the particular case should conform to its usual particular restriction” (Ashby, 1956, p. 3).

Third, cybernetics was particularly concerned with machines that were nonlinear, dynamic, and adaptive, properties that produce very complex relations between input and output. Indeed, the nonlinear relationships among as few as four simple machines interacting in a network are so complex as to be mathematically intractable (Ashby, 1960).
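A toy simulation suggests why such networks resist formal analysis. The update rule below is not Ashby’s actual homeostat equations, only an assumed illustration: four units, each of which nonlinearly squashes a weighted sum of the others, so the joint trajectory can typically only be followed numerically, step by step.

```python
# An illustrative (not Ashby's actual) network of four nonlinear units.
# Each unit's next state depends nonlinearly on all the others, so the
# joint dynamics quickly resist closed-form mathematical treatment.

import math

def step(state, coupling=0.3):
    """One synchronous update: each unit squashes a weighted sum of the rest."""
    n = len(state)
    new = []
    for i in range(n):
        drive = state[i] + coupling * sum(state[j] for j in range(n) if j != i)
        new.append(math.tanh(2.0 * drive))  # nonlinear squashing function
    return new

# Follow the network numerically, since an analytic solution is unavailable.
state = [0.1, -0.2, 0.3, -0.4]
trajectory = [state]
for _ in range(50):
    state = step(state)
    trajectory.append(state)
```

With a linear update rule the trajectory could be written down in closed form; the `tanh` nonlinearity is exactly what forces us into simulation.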

Fourth, cybernetics viewed machines in a general way that not only ignored their physical nature, but was not even concerned with whether a particular machine had been (or could be) constructed or not. “What cybernetics offers is the framework on which all individual machines may be ordered, related and understood” (Ashby, 1956, p. 2).

How could cybernetics study machines in such a way that these four different perspectives could be taken? To accomplish this, the framework of cybernetics was exclusively mathematical. Cyberneticists investigated the input-output mappings of machines by making general statements or deriving proofs that were expressed in some logical or mathematical formalism.

By the late 1950s, research in cybernetics proper had begun to wane (Conway & Siegelman, 2005); at this time cybernetics began to evolve into the modern field of cognitive science (Boden, 2006; Gardner, 1984; Miller, 2003). Inspired by advances in digital computers, cognitive science was not interested in generic “machines” as such, but instead focused upon particular devices that could be described as information processors or symbol manipulators.

Given this interest in symbol manipulation, one goal of cognitive science is to describe a device of interest in terms of the specific information processing problem that it is solving. Such a description is the result of performing an analysis at the computational level (Dawson, 1998; Marr, 1982; Pylyshyn, 1984).

A computational analysis is strongly related to the formal investigations carried out by a cyberneticist. At the computational level of analysis, cognitive scientists use formal methods to prove what information processing problems a system can—and cannot—solve. The formal nature of computational analyses lends them particular authority: “The power of this type of analysis resides in the fact that the discovery of valid, sufficiently universal constraints leads to conclusions . . . that have the same permanence as conclusions in other branches of science” (Marr, 1982, p. 331).

However, computational accounts do not capture all aspects of information processing. A proof that a device is solving a particular information processing problem is only a proof concerning the device’s input-output mapping. It does not say what algorithm is being used to compute the mapping or what physical aspects of the device are responsible for bringing the algorithm to life. These missing details must be supplied by using very different methods and vocabularies.
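The gap between the computational level and the algorithmic level can be sketched with a familiar example. The two functions below compute exactly the same input-output mapping—sorting a list—yet by entirely different procedures; a computational-level proof about sorting constrains both equally and distinguishes neither.

```python
# Two different algorithms that compute the same input-output mapping.
# A proof at the computational level fixes the mapping "unsorted list ->
# sorted list" but says nothing about which procedure a device uses.

def insertion_sort(xs):
    """Quadratic-time algorithm: grow a sorted prefix one item at a time."""
    result = []
    for x in xs:
        i = len(result)
        while i > 0 and result[i - 1] > x:
            i -= 1
        result.insert(i, x)
    return result

def merge_sort(xs):
    """Divide-and-conquer algorithm: recursively sort halves, then merge."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 4, 1, 5, 9, 2, 6]
# Identical input-output behaviour, distinct algorithms -- and nothing in
# the mapping itself reveals which one a given device implements.
```

Deciding between such alternatives requires evidence beyond the input-output mapping, which is why the algorithmic and implementational levels demand their own methods and vocabularies.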