
3.2: Why sonify non-sonic information?


    Since visual display has become the dominant form of communicating data, one might wonder why we would consider auditory display and sonification at all. This issue has been discussed fairly extensively in various contexts (e.g., Hermann et al., 2011; Kramer, 2000; Nees & Walker, 2009; Sanderson, 2006). Briefly, however, the auditory system matches, and perhaps outperforms, the visual system in a number of ways that are relevant to auditory display and sonification.

    For one, the auditory system excels at detecting rhythmic and temporal changes. For example, we can perceptually separate two brief sounds, such as a finger snap or a metronome tick, with as little as five milliseconds between them, far better than the 35-40 milliseconds required by the visual system (Ashmead, Leroy, & Odom, 1990; Gfeller, Woodworth, Robin, Witt, & Knutson, 1997; Tervaniemi & Brattico, 2004). As a result, information can be displayed in audition at a higher rate while remaining discriminable. The auditory system is also highly sensitive to temporal changes and to deviations from established patterns (Escera, Alho, Winkler, & Näätänen, 1998; Näätänen, Paavilainen, Rinne, & Alho, 2007). Auditory display may therefore be well suited to data sets that contain complex patterns and temporal changes.
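    The rate advantage implied by those gap-detection thresholds can be made concrete with some back-of-the-envelope arithmetic. The sketch below treats the cited thresholds as approximate illustrations, not precise psychophysical constants:

    ```python
    # Rough comparison of maximal discriminable event rates, assuming
    # the gap-detection thresholds cited above (illustrative values only).
    auditory_gap_ms = 5    # ~5 ms gap separates two brief sounds
    visual_gap_ms = 40     # ~35-40 ms needed by the visual system

    auditory_events_per_s = 1000 / auditory_gap_ms  # 200 events per second
    visual_events_per_s = 1000 / visual_gap_ms      # 25 events per second

    # Audition supports roughly 8x the event density of vision
    print(auditory_events_per_s / visual_events_per_s)  # → 8.0
    ```

    On these assumptions, an auditory stream could in principle carry about eight times as many temporally distinct events per second as a visual one, which is the sense in which information can be "compressed at a higher rate" in audition.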

    More practically speaking, audition is omnidirectional: the listener does not need to be oriented toward the display. This is especially important given that most primary tasks in our work environments are visual, restricting our ability to orient toward other displays. Adding more visual information may therefore be inappropriate because the visual system might already be occupied (Fitch & Kramer, 1994; Wickens & Liu, 1988), or additional visual displays may overtax an already overburdened visual system (M. L. Brown, Newsome, & Glinert, 1989). Additionally, we are able to time-share multiple tasks more efficiently when they are presented in different modalities (Driver, 2001; Driver & Spence, 1998; Wickens, 2002; Wickens, Parasuraman, & Davies, 1984). Sonification therefore presents an opportunity to convey additional information and augment task performance without interfering with, or overloading, the visual system.
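    To make the idea of sonification itself concrete, here is a minimal parameter-mapping sketch, one common approach in which each data value is mapped to a pitch so that rising data produces rising tones. The function name, data, and frequency range are illustrative assumptions, not a method prescribed by this chapter:

    ```python
    # Minimal parameter-mapping sonification sketch (illustrative only):
    # linearly map data values onto a pitch range, so that the shape of
    # the data becomes audible as a rising and falling melody.
    def value_to_frequency(value, lo, hi, f_min=220.0, f_max=880.0):
        """Linearly map a data value in [lo, hi] to a frequency in Hz."""
        fraction = (value - lo) / (hi - lo)
        return f_min + fraction * (f_max - f_min)

    data = [3, 5, 9, 4, 7]
    freqs = [value_to_frequency(v, min(data), max(data)) for v in data]
    # The minimum value (3) maps to 220 Hz and the maximum (9) to 880 Hz;
    # playing these frequencies in sequence "displays" the data in sound.
    ```

    A real auditory display would then synthesize a short tone at each frequency (and might use a perceptually motivated, e.g. logarithmic, pitch mapping), but the core idea is exactly this translation of data dimensions into sound dimensions.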


    This page titled 3.2: Why sonify non-sonic information? is shared under a not declared license and was authored, remixed, and/or curated by Matthew J. C. Crump via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
