
2.9: Algorithms from Artifacts


    Neuroscientist Valentino Braitenberg imagined a world comprising domains of both water and land (Braitenberg, 1984). In either of these domains one would find a variety of agents who sense properties of their world, and who use this information to guide their movements through it. Braitenberg called these agents “vehicles.” In Braitenberg’s world of vehicles, scientists encounter these agents and attempt to explain the internal mechanisms that are responsible for their diverse movements. Many of these scientists adopt what Braitenberg called an analytic perspective: they infer internal mechanisms by observing how external behaviours are altered as a function of specific changes in a vehicle’s environment. What Braitenberg called analysis is also called reverse engineering.

    We saw earlier that a Turing machine generates observable behaviour as it calculates the answer to a question. A description of a Turing machine’s behaviours—be they by design or by artifact—would provide the sequence of operations that were performed to convert an input question into an output answer. Any sequence of steps which, when carried out, accomplishes a desired result is called an algorithm (Berlinski, 2000). The goal, then, of reverse engineering a Turing machine or any other calculating device would be to determine the algorithm it was using to transform its input into a desired output.

    Calculating devices exhibit two properties that make their reverse engineering difficult. First, they are often what are called black boxes. This means that we can observe external behaviour, but we are unable to directly observe internal properties. For instance, if a Turing machine were a black box, then we could observe its movements along the tape and the symbols it writes there, but we could not observe the machine state of its machine head.

    Second, and particularly if we are faced with a black box, another property that makes reverse engineering challenging is that there is a many-to-one relationship between algorithm and mapping. This means that, in practice, a single input-output mapping can be established by one of several different algorithms. For example, there are so many different methods for sorting a set of items that hundreds of pages are required to describe the available algorithms (Knuth, 1997). In principle, an infinite number of different algorithms exist for computing a single input-output mapping of interest (Johnson-Laird, 1983).
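    The many-to-one relationship can be made concrete with a small sketch (the function names below are illustrative, not drawn from Knuth or Johnson-Laird): the two routines use entirely different internal steps, yet compute exactly the same input-output mapping, so observing only inputs and outputs cannot distinguish them.

```python
def insertion_sort(items):
    """Sort by inserting each item into its place in a growing sorted prefix."""
    result = []
    for item in items:
        i = 0
        while i < len(result) and result[i] <= item:
            i += 1
        result.insert(i, item)
    return result

def selection_sort(items):
    """Sort by repeatedly extracting the smallest remaining item."""
    remaining = list(items)
    result = []
    while remaining:
        smallest = min(remaining)
        remaining.remove(smallest)
        result.append(smallest)
    return result

data = [3, 1, 4, 1, 5, 9, 2, 6]
# Different algorithms, identical input-output mapping:
print(insertion_sort(data) == selection_sort(data))  # -> True
```

From the outside, a black box running either routine would be indistinguishable; only artifacts such as timing or errors could tell them apart.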

    The problem with reverse engineering a black box is this: if there are potentially many different algorithms that can produce the same input-output mapping, then mere observations of input-output behaviour will not by themselves indicate which particular algorithm is used in the device’s design. However, reverse engineering a black box is not impossible. In addition to the behaviours that it was designed to produce, the black box will also produce artifacts. Artifacts can provide a great deal of information about internal and unobservable algorithms.

    Imagine that we are faced with reverse engineering an arithmetic calculator that is also a black box. Some of the artifacts of this calculator provide relative complexity evidence (Pylyshyn, 1984). To collect such evidence, one could conduct an experiment in which the problems presented to the calculator were systematically varied (e.g., by using different numbers) and measurements were made of the amount of time taken for the correct answer to be produced. To analyze this relative complexity evidence, one would explore the relationship between characteristics of problems and the time required to solve them.

    For instance, suppose that one observed a linear increase in the time taken to solve the problems 9 × 1, 9 × 2, 9 × 3, et cetera. This could indicate that the device was performing multiplication by doing repeated addition (9, 9 + 9, 9 + 9 + 9, and so on) and that every “+ 9” operation required an additional constant amount of time to be carried out. Psychologists have used relative complexity evidence to investigate cognitive algorithms since Franciscus Donders invented his subtractive method in 1869 (Posner, 1978).
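    A minimal sketch of how such relative complexity evidence could arise, assuming a hypothetical device that multiplies by repeated addition: its step count (and hence, if each addition takes a constant amount of time, its response time) grows linearly with the multiplier, which is exactly the signature described above.

```python
def multiply_by_repeated_addition(a, b):
    """Compute a * b by adding a to a running total b times.

    The number of addition steps grows linearly with b, so if each
    step takes constant time, response time is linear in b as well.
    """
    total, steps = 0, 0
    for _ in range(b):
        total += a
        steps += 1
    return total, steps

for b in range(1, 5):
    product, steps = multiply_by_repeated_addition(9, b)
    print(f"9 x {b} = {product} in {steps} addition steps")
```

A device that instead used a stored multiplication table would show no such linear trend, which is why the shape of the complexity profile constrains the inferred algorithm.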

    Artifacts can also provide intermediate state evidence (Pylyshyn, 1984). Intermediate state evidence is based upon the assumption that an input-output mapping is not computed directly, but instead requires a number of different stages of processing, with each stage representing an intermediate result in a different way. To collect intermediate state evidence, one attempts to determine the number and nature of these intermediate results.

    For some calculating devices, intermediate state evidence can easily be collected. For instance, the intermediate states of the Turing machine’s tape, the abacus’s beads, or the difference engine’s gears are in full view. For other devices, though, the intermediate states are hidden from direct observation. In this case, clever techniques must be developed to measure internal states as the device is presented with different inputs. One might measure changes in electrical activity in different components of an electronic calculator as it worked, in an attempt to acquire intermediate state evidence.
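    To illustrate what intermediate state evidence looks like, here is a hypothetical instrumented version of a repeated-addition multiplier that records each intermediate result as it runs. The trace is exactly the sequence of stage-by-stage states a reverse engineer would hope to observe.

```python
def multiply_with_trace(a, b):
    """Multiply by repeated addition, recording every intermediate
    running total -- the 'intermediate states' of the computation."""
    total = 0
    trace = []
    for _ in range(b):
        total += a
        trace.append(total)
    return total, trace

product, intermediate_states = multiply_with_trace(9, 4)
print(intermediate_states)  # -> [9, 18, 27, 36]
```

If measurements of a black box revealed internal states matching 9, 18, 27, 36 during the computation of 9 × 4, that would strongly favour the repeated-addition hypothesis over, say, table lookup, which would pass through no such sequence.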

    Artifacts also provide error evidence (Pylyshyn, 1984), which may also help to explore intermediate states. When extra demands are placed on a system’s resources, it may not function as designed, and its internal workings are likely to become more evident (Simon, 1969). This is not just because the overtaxed system makes errors in general, but because these errors are often systematic, and their systematicity reflects the underlying algorithm.

    Because we rely upon their accuracy, we would hope that error evidence would be difficult to collect for most calculating devices. However, error evidence should be easily available for calculators that might be of particular interest to us: humans doing mental arithmetic. We might find, for instance, that overtaxed human calculators make mistakes by forgetting to carry values from one column of numbers to the next. This would provide evidence that mental arithmetic involved representing numbers in columnar form, and performing operations column by column (Newell & Simon, 1972). Very different kinds of errors would be expected if a different approach was taken to perform mental arithmetic, such as imagining and manipulating a mental abacus (Hatano, Miyake, & Binks, 1977).
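    The logic of error evidence can be sketched in code. The model below is hypothetical (the function and its `carry_failure_rate` parameter are illustrative, not from Newell and Simon): it adds column by column, right to left, and when overtaxed it sometimes "forgets" the carry. The resulting errors are systematic — wrong digits appear precisely where a carry was dropped — and that systematicity is what points back to the columnar algorithm.

```python
import random

def columnar_add(x, y, carry_failure_rate=0.0, rng=None):
    """Add two non-negative integers column by column, right to left.

    With probability carry_failure_rate, the carry into the next
    column is forgotten, modelling an overtaxed mental calculator.
    """
    rng = rng or random.Random(0)
    digits_x = [int(d) for d in str(x)][::-1]
    digits_y = [int(d) for d in str(y)][::-1]
    length = max(len(digits_x), len(digits_y))
    digits_x += [0] * (length - len(digits_x))
    digits_y += [0] * (length - len(digits_y))
    carry, out = 0, []
    for dx, dy in zip(digits_x, digits_y):
        s = dx + dy + carry
        out.append(s % 10)
        carry = s // 10
        if carry and rng.random() < carry_failure_rate:
            carry = 0  # forgot to carry into the next column
    if carry:
        out.append(carry)
    return int("".join(str(d) for d in out[::-1]))

print(columnar_add(57, 68))                          # -> 125 (correct)
print(columnar_add(57, 68, carry_failure_rate=1.0))  # -> 15 (both carries dropped)
```

An abacus-based strategy would not produce this particular error pattern, which is why the pattern of mistakes, not just their existence, discriminates between candidate algorithms.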

    In summary, discovering and describing what algorithm is being used to calculate an input-output mapping involves the systematic examination of behaviour. That is, one makes and interprets measurements that provide relative complexity evidence, intermediate state evidence, and error evidence. Furthermore, the algorithm that will be inferred from such measurements is in essence a sequence of actions or behaviours that will produce a desired result.

    The discovery and description of an algorithm thus involves empirical methods and vocabularies, rather than the formal ones used to account for input-output regularities. Just as input-output mappings would likely be the topic of interest for formal researchers such as cyberneticists, logicians, or mathematicians, algorithmic accounts would be the topic of interest for empirical researchers such as experimental psychologists.

    The fact that computational accounts and algorithmic accounts are presented in different vocabularies suggests that they describe very different properties of a device. From our discussion of black boxes, it should be clear that a computational account does not provide algorithmic details: knowing what input-output mapping is being computed is quite different from knowing how it is being computed. In a similar vein, algorithmic accounts are silent with respect to the computation being carried out.

    For instance, in Understanding Cognitive Science, Dawson (1998) provides an example machine table for a Turing machine that adds pairs of integers. Dawson also provides examples of questions to this device (e.g., strings of blanks, 0s, and 1s) as well as the answers that it generates. Readers of Understanding Cognitive Science can pretend to be the machine head by following the instructions of the machine table, using pencil and paper to manipulate a simulated ticker tape. In this fashion they can easily convert the initial question into the final answer—they fully understand the algorithm. However, they are unable to say what the algorithm accomplishes until they read further in the book.
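    The experience of following a machine table without knowing what it accomplishes can be reproduced with a toy simulator. The table below is not Dawson's machine table; it is an illustrative stand-in that adds two unary numbers (e.g., '110111_' encodes 2 and 3, with '0' as separator and '_' as blank). One can trace it mechanically, step by step, and convert question into answer without the table itself ever revealing that the computation is addition.

```python
# (state, symbol) -> (write, move, next_state); an illustrative table,
# NOT the one in Understanding Cognitive Science.
TABLE = {
    ("start", "1"): ("1", +1, "start"),  # skip over the first number
    ("start", "0"): ("1", +1, "join"),   # overwrite the separator with a 1
    ("join",  "1"): ("1", +1, "join"),   # skip over the second number
    ("join",  "_"): ("_", -1, "erase"),  # reached the blank at the end
    ("erase", "1"): ("_", -1, "halt"),   # erase one 1 to correct the count
}

def run(tape, state="start", pos=0):
    """Follow the machine table until the halt state, then return the tape."""
    tape = list(tape)
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"
        write, move, state = TABLE[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = write
        pos += move
    return "".join(tape).strip("_")

print(run("110111_"))  # -> '11111', i.e., 2 + 3 = 5 in unary
```

Following `run` by hand gives a complete algorithmic account of the device, yet says nothing about the computational account — that the mapping being computed is addition — which mirrors the point made in the paragraph above.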

    2.9: Algorithms from Artifacts is shared under a CC BY-NC-ND license and was authored, remixed, and/or curated by Michael R. W. Dawson (Athabasca University Press).
