
6.7: The Embodied Approach to Musical Cognition


    A well-established modern view of classical music is that it has meaning, and that its purpose is to convey this meaning in a fashion that is consistent with Reddy’s (1979) conduit metaphor.

    Composers and performers of all cultures, theorists of diverse schools and styles, aestheticians and critics of many different persuasions are all agreed that music has meaning and that this meaning is somehow communicated to both participants and listeners. (Meyer, 1956, p. 1)

    Furthermore, there is a general consensus that the meaning that is communicated is affective, and not propositional, in nature. However, the means by which musical meaning is communicated is subject to a tremendous amount of debate (Meyer, 1956; Robinson, 1997).

    One view of musical communication, consistent with classical cognitive science, is that music is a symbol system. For example, the semiotic view of music is that it is a system of signs that provides a narrative or a discourse (Agawu, 1991, 2009; Austerlitz, 1983; Lidov, 2005; Monelle, 2000; Pekkilä, Neumeyer, & Littlefield, 2006; Tarasti, 1995; Turino, 1999). From this perspective, musical signs are intentional: they are about the tensions or emotions they produce or release in listeners. This approach naturally leads to an exploration of the parallels between music and language (Austerlitz, 1983; Jackendoff, 2009; Lidov, 2005), as well as to the proposal of generative grammars of musical structure (Lerdahl, 2001; Lerdahl & Jackendoff, 1983; Sundberg & Lindblom, 1976). Potential parallels between language and music have led some researchers to describe brain areas for syntax and semantics that are responsible for processing both music and language (Koelsch et al., 2004; Patel, 2003).

    A related view of musical communication, but one more consistent with connectionist than classical cognitive science, is that music communicates emotion but does so in a way that cannot be captured by a set of formal rules or laws (Lewis, 1991; Loy, 1991; Minsky, 1981; Todd, 1989). Instead, musical meanings are presumed to be entwined in a complex set of interactions between past experiences and current stimulation, interactions that may be best captured by the types of learning exhibited by artificial neural networks. “Many musical problems that resist formal solutions may turn out to be tractable anyway, in future simulations that grow artificial musical semantic networks” (Minsky, 1981, p. 35).

    Both views of musical meaning described above are consistent with the conduit metaphor, in that they agree that (1) music is intentional and content-bearing (although they disagree about formalizing this content) and (2) that the purpose of music is to communicate this content to audiences. A third approach to musical meaning, most consistent with embodied cognitive science, distinguishes itself from the other two by rejecting the conduit metaphor.

    According to the embodied view (Clarke, 2005; Johnson, 2007; Leman, 2008), the purpose of engaging with music is not to acquire abstract or affective content, but to experience music directly, interactively, and physically. “People try to be involved with music because this involvement permits an experience of behavioral resonance with physical energy” (Leman, 2008, p. 4).

    The emphasis on direct contact that characterizes the embodied view of music is a natural progression from the autonomist school of musical aesthetics that arose in the nineteenth century (Hanslick, 1957). Music critic Eduard Hanslick (1957) opposed the view that music was representative and that its purpose was to communicate content or affect. For Hanslick, a scientific aesthetics of music was made impossible by sentimental appeals to emotion: “The greatest obstacle to a scientific development of musical aesthetics has been the undue prominence given to the action of music on our feelings” (p. 89).

    As noted previously, Hanslick (1957, p. 48) argued instead that “the essence of music is sound and motion.” The modern embodied approach to music echoes and amplifies this perspective. Johnson (2007) agreed with Hanslick that music is not typically representative or intentional. Instead, Johnson argued that the dynamic nature of music—its motion, in Hanslick’s sense—presents “the flow of human experience, feeling, and thinking in concrete, embodied forms” (p. 236). The motion of music is not communicative; it is causal. “To hear the music is just to be moved and to feel in the precise way that is defined by the patterns of the musical motion” (p. 239). The motion intrinsic to the structure of music is motion that we directly and bodily experience when it is presented to us. Johnson argued that this is why metaphors involving motion are so central to our conceptualization of music.

    “Many people try to get into direct contact with music. Why do they do so? Why do people make great efforts to attend a concert? Why do they invest so much time in learning to play a musical instrument?” (Leman, 2008, p. 3). If the meaning of music is the felt movement that it causes, then the need for direct experience of music is completely understandable. This need is also reflected in the embodied view’s abandonment of the conduit metaphor: the embodied view does not accept the notion that music is a conduit for the transmission of propositional or affective contents. Indeed, it hypothesizes that the rational assessment of music might interfere with how music is best experienced.

    Activities such as reasoning, interpretation, and evaluation may disturb the feeling of being directly involved because the mind gets involved in a representation of the state of the environment, which distracts the focus and, as a result, may break the ‘magic spell’ of being entrained. (Leman, 2008, p. 5)

    Clearly, embodied researchers have a very different view of music than do classical or connectionist researchers. This in turn leads to very different kinds of research on musical cognition from the examples introduced earlier in this chapter.

    To begin, let us consider the implication of the view that listeners should be directly involved with music (Leman, 2008). From this view, it follows that the full appreciation of music requires far more than the cognitive interpretation of auditory stimulation. “It is a matter of corporeal immersion in sound energy, which is a direct way of feeling musical reality. It is less concerned with cognitive reflection, evaluation, interpretation, and description” (Leman, 2008, p. 4). This suggests that cross-modal interactions may be critical determinants of musical experience.

    Some research on musical cognition is beginning to explore this possibility. In one study (Vines et al., 2006), subjects were presented with performances by two clarinetists. Some subjects only heard, some subjects only saw, and some subjects both heard and saw the performances. Compared to the first two groups of subjects, those who both heard and saw the performances had very different experiences. The visual information altered the experience of tension at different points, and the movements of the performers provided additional information that affected the experienced phrasing as well as expectations about emotional content. “The auditory and visual channels mutually enhance one another to convey content, and . . . an emergent quality exists when a musician is both seen and heard” (p. 108).

    In a more recent study, Vines et al. (2011) used a similar methodology, but they also manipulated the expressive style with which the stimulus (a solo clarinet piece composed by Stravinsky) was performed. Subjects were presented with the piece performed in a restrained, standard, or exaggerated fashion. These manipulations of expressive style only affected the subjects who could see the performance. Again, interactions were evident when performances were both seen and heard. For instance, subjects in this condition gave significantly higher ratings of “happiness” than did other subjects.

    The visual component of musical performance makes a unique contribution to the communication of emotion from performer to audience. Seeing a musician can augment, complement, and interact with the sound to modify the overall experience of music. (Vines et al., 2011, p. 168)

    Of course, the embodied approach to music makes much stronger claims than that there are interactions between hearing and seeing; it views cognition not as a medium for planning, but instead as a medium for acting. It is not surprising, then, to discover that embodied musical cognition has studied the relationships between music and actions, gestures, and motion in a variety of ways (Gritten & King, 2011).

    One of the most prominent of these relationships involves the exploration of new kinds of musical instruments, called digital musical instruments. A digital musical instrument is a musical instrument that involves a computer and in which the generation of sound is separate from the control interface that selects it (Marshall et al., 2009). This distinction is important because, as Marshall et al. (2009) pointed out, there are many available sensors that can register a human agent’s movements, actions, or gestures. These include force-sensitive resistors, video cameras, accelerometers, potentiometers, and bend sensors, not to mention buttons and microphones.

    The availability of digital sensors permits movements, actions, and gestures to be measured and used to control the sounds generated by a digital musical instrument. This requires that a mapping be defined from a measured action to a computer-generated sound (Verfaille, Wanderley, & Depalle, 2006). Of course, completely novel relationships between gesture and sound become possible within this framework (Sapir, 2002). This permits the invention of musical instruments that can be played by individuals with no training on an instrument, because they can interact with a digital musical instrument using everyday gestures and actions (Paradiso, 1999).
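    To make such a mapping concrete, the following sketch (in Python) shows how readings from two hypothetical sensors, a tilt sensor and a force-sensitive pad, might be translated into parameters for a separate sound-generation stage. The sensor names, ranges, and scale are illustrative assumptions rather than the design of any system cited above.

```python
# Minimal sketch of the mapping layer of a digital musical instrument:
# sensor readings from the control interface are converted into
# parameters for a separate sound-generation stage. Sensor names,
# ranges, and the scale are illustrative assumptions.

def map_gesture_to_sound(tilt_deg, pressure):
    """Map a tilt angle (-90..90 degrees) and pad pressure (0..1) to synthesis parameters."""
    # Quantize the tilt onto a two-octave C major scale (semitone offsets from middle C).
    scale = [0, 2, 4, 5, 7, 9, 11, 12, 14, 16, 17, 19, 21, 23, 24]
    index = int((tilt_deg + 90.0) / 180.0 * (len(scale) - 1))
    midi_note = 60 + scale[max(0, min(index, len(scale) - 1))]

    # Pressure on the force-sensitive pad controls loudness.
    amplitude = max(0.0, min(pressure, 1.0))

    frequency_hz = 440.0 * 2 ** ((midi_note - 69) / 12)
    return {"frequency_hz": frequency_hz, "amplitude": amplitude}

# Example: a slight forward tilt with moderate pressure.
print(map_gesture_to_sound(tilt_deg=20.0, pressure=0.6))
```

    The point of the sketch is only that the mapping is itself a design decision: substituting a different mapping function changes the instrument without changing either the sensors or the synthesizer, which is why completely novel relationships between gesture and sound become possible within this framework.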

    The development of digital musical instruments has resulted in the need to study a variety of topics quite different from those examined by classical and connectionist researchers. One important topic involves determining how to use measured actions to control sound production (Verfaille, Wanderley, & Depalle, 2006). However, an equally important topic concerns the nature of the gestures and actions themselves. In particular, researchers of digital musical instruments are concerned with exploring issues related to principles of good design (Dourish, 2001; Norman, 2002, 2004) in order to identify and evaluate possible interfaces between actions and instruments (Magnusson, 2010; O’Modhrain, 2011; Ungvary & Vertegaal, 2000; Wanderley & Orio, 2002). Another issue is to choose a set of actions that can be varied, so that a performer of a digital musical instrument can manipulate its expressiveness (Arfib, Couturier, & Kessous, 2005).

    The development of digital musical instruments has also led to a reevaluation of the roles of composers, performers, and audience. In the acoustic paradigm (Bown, Eldridge, & McCormack, 2009), which adheres to the traditional view of classical music outlined earlier in this chapter, these three components have distinct and separable roles. Digital musical instruments disrupt this paradigm. Bown, Eldridge, and McCormack (2009) argued that the software components should not be viewed as instruments, but instead as behavioral objects. A behavioral object is “an entity that can act as a medium for interaction between people through its dissemination and evolution, can develop interactively with individuals in processes of creative musical development, and can interact with other behavioral objects to produce musical output” (p. 193); it is behavioral in the sense that it can act and interact, but it is an object in the sense that it is a material thing that can be seen and touched.

    In their role as behavioral objects, digital musical instruments blur the sharp distinctions between the roles defined by the acoustic paradigm (Bown, Eldridge, & McCormack, 2009). This is because their software components dramatically alter the interactions between composer, performer, and listener.

    Interaction does not involve the sharing simply of passive ideas or content, but of potentially active machines that can be employed for musical tasks. Whereas musical ideas may once have developed and circulated far more rapidly than the inanimate physical objects that define traditional musical instruments, software objects can now evolve and move around at just as fast a pace. (Bown, Eldridge, & McCormack, 2009, p. 192)

    The new interactions discussed by Bown, Eldridge, and McCormack (2009) suggested that digital musical instruments can affect musical thought. It has been argued that these new instruments actually scaffold musical cognition, and therefore they extend the musical mind (Magnusson, 2009). According to Magnusson, traditional acoustic instruments have been created in bricoleur fashion by exploring combinations of existing materials, and learning to play such an instrument involves exploring its affordances. “The physics of wood, strings and vibrating membranes were there to be explored and not invented” (p. 174). In contrast, the software of digital musical instruments permits many aspects of musical cognition to be extended into the instrument itself. Digital musical instruments,

    typically contain automation of musical patterns (whether blind or intelligent) that allow the performer to delegate musical actions to the instrument itself, such as playing arpeggios, generating rhythms, expressing spatial dimensions as scales (as opposed to pitches), and so on. (Magnusson, 2009, p. 168)
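    A minimal sketch of this kind of delegation, assuming a hypothetical instrument whose software fills in an arpeggio from a single note supplied by the performer (the pattern and note numbers are illustrative only):

```python
# Illustrative sketch of pattern automation delegated to the instrument:
# the performer supplies one root note and the instrument generates the
# arpeggio. The pattern and note numbers are assumptions for illustration.

def arpeggiate(root_midi, pattern=(0, 4, 7, 12), steps=8):
    """Generate a repeating major-triad arpeggio (MIDI note numbers) from a root note."""
    return [root_midi + pattern[i % len(pattern)] for i in range(steps)]

# The performer "plays" a single C4 (MIDI 60); the instrument fills in the rest.
print(arpeggiate(60))  # [60, 64, 67, 72, 60, 64, 67, 72]
```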

    The embodied approach is not limited to the study of digital musical instruments. Actions are required to play traditional musical instruments, and such actions have been investigated. For instance, researchers have examined the fingering choices made by pianists as they sight read (Sloboda et al., 1998) and developed ergonomic models of piano fingering (Parncutt et al., 1997). Bowing and fingering movements for string instruments have also been the subject of numerous investigations (Baader, Kazennikov, & Wiesendanger, 2005; Kazennikov & Wiesendanger, 2009; Konczak, van der Velden, & Jaeger, 2009; Maestre et al., 2010; Rasamimanana & Bevilacqua, 2008; Turner-Stokes & Reid, 1999). This research has included the development of the MusicJacket, a wearable device that analyzes the movement of a violin player and provides vibrotactile feedback to teach proper bowing (van der Linden et al., 2011). The relationship between alternative flute fingerings and the tones they produce has also been examined (Botros, Smith, & Wolfe, 2006; Verfaille, Depalle, & Wanderley, 2010).
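    To give a flavour of what an ergonomic model of fingering involves, the toy sketch below scores two candidate right-hand fingerings for a five-note passage by penalizing uncomfortable stretches and repeated fingers. The specific penalties are assumptions made for illustration; they are not the rules proposed by Parncutt et al. (1997).

```python
# Toy sketch of an ergonomic cost function for piano fingering.
# The penalties below are illustrative assumptions, not the rules
# of any published model.

COMFORTABLE_SEMITONES_PER_FINGER_STEP = 2  # assumed comfortable stretch

def fingering_cost(notes, fingering):
    """Score a fingering (1 = thumb .. 5 = little finger) for a note sequence (MIDI numbers)."""
    cost = 0
    for (n1, n2), (f1, f2) in zip(zip(notes, notes[1:]), zip(fingering, fingering[1:])):
        span = abs(n2 - n1)                        # distance between consecutive keys
        reach = abs(f2 - f1) * COMFORTABLE_SEMITONES_PER_FINGER_STEP
        cost += abs(span - reach)                  # penalize stretching or cramping
        if f1 == f2 and n1 != n2:
            cost += 5                              # penalize repeating a finger on a new key
    return cost

notes = [60, 62, 64, 65, 67]                       # C D E F G
print(fingering_cost(notes, (1, 2, 3, 4, 5)))      # natural fingering: low cost (1)
print(fingering_cost(notes, (1, 1, 1, 1, 1)))      # repeated thumb: high cost (27)
```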

    The embodied approach is also actively exploring the possibility that gestural or other kinds of interactions can be used to retrieve digitized music (Casey et al., 2008; Leman, 2008). Personal music collections are becoming vast, and traditional methods of discovering music (e.g., record stores and radio stations) are being replaced by social networking sites and the World Wide Web. As a result, there is a growing need for these large digital collections of music to be searchable. However, the most common approach for cataloguing and searching these collections is to use textual metadata that provides an indirect description of the stored music, such as the name of the composer, the title of the song, or the genre of the music (Leman, 2008).

    The embodied approach is interested in the possibility of using more direct aspects of music to guide such retrieval (Leman, 2008). Is it possible to access music on the basis of one’s personal experience of music? Leman hypothesizes that human action can serve as the basis of a corporeal-based querying system for retrieving music. His idea is to use the body to convert a musical idea (e.g., a desire to retrieve a particular type of music) into musical physical energy that can be mapped onto the profiles of digitized music, permitting content-based retrieval. For instance, one could query a musical database by singing or playing a melody (De Mulder et al., 2006), by manipulating a spatial representation that maps the similarity of stored music (Cooper et al., 2006; Pampalk, Dixon, & Widmer, 2004), or even by making gestures (Ko & Byun, 2002).
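    A minimal sketch of the melodic side of such a query is given below, assuming a small in-memory collection and matching by melodic contour alone; the tunes, their pitch encodings, and the matching rule are illustrative assumptions rather than the methods of the systems cited above.

```python
# Minimal sketch of content-based retrieval by sung or played melody:
# a query (as MIDI pitches) is reduced to its interval contour and
# compared against a small in-memory collection. The collection and
# matching rule are assumptions for illustration only.

def contour(pitches):
    """Reduce a pitch sequence to its up/down/repeat contour, ignoring key and octave."""
    return ["U" if b > a else "D" if b < a else "R" for a, b in zip(pitches, pitches[1:])]

def contour_distance(a, b):
    """Count mismatches between two contours, plus a penalty for differing lengths."""
    n = min(len(a), len(b))
    return sum(x != y for x, y in zip(a[:n], b[:n])) + abs(len(a) - len(b))

collection = {
    "Ode to Joy":      [64, 64, 65, 67, 67, 65, 64, 62],
    "Twinkle Twinkle": [60, 60, 67, 67, 69, 69, 67],
    "Frere Jacques":   [60, 62, 64, 60, 60, 62, 64, 60],
}

# A hummed query, transposed to a different key, still matches by contour.
query = [69, 69, 71, 72, 72, 71, 69, 67]
best = min(collection, key=lambda title: contour_distance(contour(query), contour(collection[title])))
print(best)  # expected: "Ode to Joy"
```

    Because the query is reduced to its contour, it can be sung in any key and still retrieve the intended piece; in this sense it is the body’s musical output, rather than textual metadata, that drives the search.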

    Compared to the other two approaches described in this chapter, the embodied approach to musical cognition is relatively new and not as well established. “The hypothesis that musical communication is based on the encoding, transmission, and decoding of intended actions is, I believe, an attractive one. However, at this moment it is more a working hypothesis than an established fact” (Leman, 2008, p. 237). This “working hypothesis,” though, has launched an interesting literature on the relationship between music and action that is easily distinguished from classical and connectionist research on musical cognition.


    This page titled 6.7: The Embodied Approach to Musical Cognition is shared under a CC BY-NC-ND license and was authored, remixed, and/or curated by Michael R. W. Dawson (Athabasca University Press).
