
Musical Rhythm Embedded in the Brain: Bridging Music, Neuroscience, and Empirical Aesthetics

Sylvie Nozaradan

Chapter

Abstract

Entrainment to music seems ubiquitous in human cultures. The impact of musical features on individuals has already been explored extensively in music theory, anthropology and psychology. In contrast, it is a relatively new field in neuroscience. Recently, a wave of neuroscience research has emerged exploring the interaction with music in both human and non-human brains, and in evolutionary terms. This chapter briefly reviews some of the biological evidence of music processing, focusing particularly on how the human brain interacts with musical rhythm. Neural entrainment to musical rhythm is proposed as a model particularly well suited to address objectively, within an experimental setup, how biological rules shape music perception within a limited range of complexity. However, these limits are not fixed. Other aspects such as familiarity, culture, training and context continuously shape brain responses to rhythms and to music in general. Taken together, these studies propose answers to the question of how natural and cultural constraints shape each other, acting as a vivid motor of aesthetic evolution.

Keywords

Music cognition · Musical rhythm perception · Neuroimaging · Empirical aesthetics · Art and science

1 Introduction: Music and Neuroscience

Musical sound seems to have a large impact on human beings. Typically, it is a highly rewarding stimulus for human listeners. For instance, it is played in restaurants and department stores, as it has been shown to improve sales, presumably because of its positive influence on mood (Bruner 1990; North et al. 2003). Listening to our favourite music activates in the human brain the same reward pathways that are stimulated by food, cocaine and sex, for instance (Blood and Zatorre 2001; Salimpoor et al. 2011). The reward of listening to music motivates people to consume a large amount of music, and to expend considerable resources on it.

The impact of musical features on individuals has already been explored extensively in music theory, anthropology and psychology. In contrast, it is a relatively new field in neuroscience. For neuroscientists, there are many open questions concerning the nature of our response to music, including where music is processed within the human brain (Peretz and Zatorre 2003), whether it is related to other cognitive abilities such as language (Patel 2008), or why music entrains people to move their bodies synchronously. Hence, studying the biological foundations underlying the perception and production of music could provide insights not only into the social and individual aspects of this human behaviour, but also into numerous fundamental brain mechanisms.

Natural and cultural constraints continuously interact with each other. Natural constraints include physical and biological rules, whereas cultural constraints are conditions imposing the limits on what is appreciated and considered valid within a given culture (Leman 2008). Although natural and cultural constraints differ fundamentally in their essence and dynamics, their continuous interaction is thought to constitute the motor of cultural evolution and of the evolution of artistic expression (Leman 2008). In other words, this interaction could be responsible for the fact that a given piece of music is found pleasant or not within a given context. Taken from the perspective of research on the neuroscience of music, the interaction between natural and cultural constraints could correspond to the interplay between how the human brain shapes music perception and performance on the one hand, and, in turn, how music shapes brain structure and function throughout the life span on the other.

2 Entrainment to Musical Rhythms

In all societies, the primary function of music is often considered to be a collective function, to bind people and increase cohesion within a group of individuals (Sacks 2008). People sing and dance together in every culture, and these joint behaviours are thought to have taken place throughout the history of Homo sapiens. Also, these musical abilities develop spontaneously, by simple exposure to music within a given culture, and a possible initial trigger appears to lie in maternal vocal singing. Mothers sing to their children, in all known cultures (Phillips-Silver and Keller 2012).

Entrainment to music often refers to the spatiotemporal coordination of one, two or more individuals in response to rhythmic sounds (Phillips-Silver et al. 2010; Phillips-Silver and Keller 2012). People often experience musical entrainment in the automatic, even uncontrollable head-bobbing or foot-tapping that occurs when listening to music containing a regular beat, for instance. Although it is an extremely common behaviour, moving in sync with music, also referred to as sensorimotor synchronization (i.e. the synchronization of body movement to an external sensory input such as sound), is a highly complex activity, which involves not only auditory but also visual or tactile perception. It also requires attention, body movement performance and coordination within and across individuals (Phillips-Silver et al. 2010; Todd et al. 2002). Hence, a large network of brain structures is involved during entrainment to music (Zatorre et al. 2007; Grahn 2012). There is relatively recent and growing interest in understanding the functional and neural mechanisms of entrainment to music, as it may constitute a unique gateway to understanding how the human brain functions. The present chapter focuses on this research question, as a model illustrating how neuroscience and music can learn from each other, and how biological and cultural constraints interact to build a sense of aesthetics.

The beat, which usually refers to the perception of periodicities while listening to music, can be considered a cornerstone of music and dance behaviours. Even when music is not strictly periodic, humans perceive periodic pulses and spontaneously entrain their bodies to these beats (London 2004). Beats can be grouped or subdivided into metres over time (e.g., the metre of a waltz, a 3-beat metre, corresponds to the grouping of 3 beats in a measure; it thus has a frequency of f/3, f being the frequency of the beats). Typically, beat and metre perception is known to occur within a specific frequency range corresponding to musical tempo (i.e. around 0.5–5 Hertz, or Hz, corresponding to 0.5–5 beats per second) (van Noorden and Moelants 1999; Repp 2005). A major goal in this research area is to narrow the gap between entrainment to musical rhythms as a human behaviour on the one hand and phenomena of entrainment in human brain activity on the other.
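The relation between tempo, beat frequency and metre frequency described above can be sketched numerically. The helper names below are illustrative, not from the chapter:

```python
def beat_frequency_hz(tempo_bpm):
    """Convert a musical tempo in beats per minute to a beat frequency in Hz."""
    return tempo_bpm / 60.0

def metre_frequency_hz(tempo_bpm, beats_per_measure):
    """Frequency of a metre grouping the beat, e.g. f/3 for a 3-beat waltz metre."""
    return beat_frequency_hz(tempo_bpm) / beats_per_measure

# A waltz at 180 BPM: beats at 3 Hz, measures (3-beat metre) at 1 Hz,
# both within the 0.5-5 Hz range typical of beat and metre perception.
print(beat_frequency_hz(180))      # 3.0
print(metre_frequency_hz(180, 3))  # 1.0
```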

3 Evolutionary Status of Musical Rhythms

Why is our species so musical? There is a vigorous debate over the evolutionary status of music, and musical rhythms. Some argue that humans have been shaped by evolution to be musical (Wallin et al. 2000). This was first proposed by Charles Darwin in 1871, who referred to music and dance as courtship displays. In line with this view, a number of hypotheses have been proposed about the possible adaptive roles of music, and musical rhythm and beat (Fitch 2006). The dominant view lies at the group-level rather than at the individual-level, with music helping to promote group cohesion. This bonding effect of music may well be initialized in the mother-infant interactive pattern created through maternal singing.

Others consider that musical abilities have not been a target of natural selection but, instead, reflect an alternative use of more adaptive cognitive skills, such as language, auditory scene analysis and sensorimotor synchronization (Pinker 1997). This alternative proposition had already been expressed by William James, who thought that attraction toward music was “a mere incidental peculiarity of the nervous system” (cited in Langer 1942). A way to resolve this debate is to examine the innateness, domain-specificity and human-specificity of some rhythmic traits in musical behaviours (as described briefly in the following sections) (Patel 2006). This approach has the advantage of linking evolutionary studies of music to empirical research, recently developed by investigating, for instance, the abilities of human infants and of animals to interact with musical rhythms, and how these abilities overlap with other cognitive processes such as language or movement coordination.

4 Beat in Music: A Universal Feature?

Music always escapes definitions, probably because there are as many musical forms as musicians and listeners. It can be considered as a communication and signalling process such as language, but remains above all an artistic form of expression (Arom 2000). This implies that humans possess the capacity to ‘decontextualize’ the form of this expression and generate it independently of any context or signifier-signified constraints, in contrast with ordinary language (Arom 2000).

In line with these considerations, it is perfectly conceivable to find music that does not contain beat and metre, either because composers did not write their music by means of a periodic reference frame, and/or because we do not perceive any beat when listening to these musical pieces (Patel 2008). As a proof of concept, one can ask individuals to move to such musical pieces. In music with no beat, the observed movements are not periodic, and often these pieces do not entrain individuals to move spontaneously at all. Examples of music that does not contain a beat structure are found in the cantus planus of the medieval Gregorian tradition or in the melodic recitation of poems in the classic Persian tradition (Nelson 1985).

Hence, beat and metre do not constitute an obligatory ingredient of music, although this periodic reference frame is widely used across musical genres and cultures. Its use is likely to be related to the goal of musical expression: when music aims at conveying coordination across individuals, beat and metre are powerful means to achieve it.

As one could expect, rhythm and beat have not been developed similarly across musical cultures (Pressing 2002). Some traditions, such as black Atlantic music (i.e. the musical traditions originating from West Africa and their evolution across West African diasporas), have given rhythmic aspects a prominent place in their musical behaviours. In particular, black Atlantic music has developed a strong culture of groove, which refers to the urge to move elicited by music (Pressing 2002; Iyer 2002). The various musical features inducing a sense of groove are often found in funk, soul, hip-hop, trip-hop, drum’n’bass, house or jazz, i.e. music genres predominantly originating from the black Atlantic tradition (Witek 2012).

5 A Human-Specific Social Feature?

Surprisingly, animals as highly intelligent and close to humans as chimpanzees have never been shown to process musical beats, even in their most primitive forms and after training, whereas they can voluntarily produce rhythmic movements on a time scale appropriate for beat processing (Merchant and Honing 2014). Moreover, synchronization of movement to a musical beat is not commonly observed in domestic animals, such as dogs, that have lived with humans and their music for thousands of years (Fitch 2006).

To explain this, the vocal learning hypothesis has been proposed (Patel 2006). Vocal learning refers to the ability of animals to modify vocal signals as a result of experience with sounds, usually those produced by individuals of the same species. By extension, this definition is often restricted to cases where animals learn to mimic sounds that are not in their species’ repertoire. Recently, evidence that specific species, such as parrots, present abilities for beat processing has corroborated the vocal learning hypothesis: Patel et al. (2009) reported the case of one parrot exhibiting the ability to synchronize body movements with a musical beat and to adjust these movements according to changes in tempo. Hence, the fact that an animal can acquire the ability to process the beat of music through training, although this is unnatural for the species, would suggest that this ability is not part of a selective adaptation for music (Patel 2006).

Importantly, when searching for evidence of beat processing as an ecologically natural behaviour, Homo sapiens is the unique species manifesting spontaneous synchronization of periodic body movements to acoustic rhythms, engaging both sexes (Patel 2006). Moreover, this skill develops relatively early in human ontogeny, long before sexual maturity (Fitch 2006). Although vocal learning could provide the neural circuitry required for beat processing in music, this is perhaps not sufficient for spontaneous entrainment to musical sounds (Fitch 2006).

One possibility is that the propensity to engage in joint social action plays a crucial role in triggering and developing these rhythmic behaviours specific to music. This was suggested by the observation that young children improve their synchronization abilities when engaged in a joint action with an adult compared to a disembodied metronome, possibly through the building of a shared body representation and increased motivation (Kirschner and Tomasello 2009). Moreover, it has been shown that groove in music is associated with positive affect in children and adults (Janata et al. 2012; Witek 2012), and that interpersonal synchrony, even in non-musical contexts, increases affiliation (Hove and Risen 2009). Hence, music and dance can be considered a powerful medium, alternative to speech, to signal physical ability and health, or to communicate recognizable emotions across individuals. This tight link between joint social action and musical behaviours would explain why such coordinated musical behaviours play an important role in collective work, rituals or war dances, for example, widely across cultures (Hagen and Bryant 2003).

6 Nature Versus Nurture

The study of infants and their interaction with musical rhythms is also relevant to understanding the biological basis of music perception and production. However, observations of children are particularly difficult to interpret because rhythm perception and production develop asymmetrically in childhood, due to the distinct maturation speeds of the systems responsible for motor output, for processing sensory input, etc. Also, while musical rhythm production is measured by capturing body movement in response to musical sounds, perception is assessed by indirect measures reflecting the attention and familiarity of the infants with the presented musical sounds. Hence, the fundamental differences between the two measures make direct comparisons between the two aspects very limited (Hannon and Johnson 2005).

Already at 9 months, infants engage in significantly more rhythmic movement to music and other rhythmic sounds than to speech, for instance, and exhibit to some extent tempo flexibility in their body movements in response to music (Zentner and Eerola 2010). Interestingly, while newborns and young infants may grasp basic aspects of rhythm and metre, their experience of listening within a given culture rapidly influences how they respond to such structures. Indeed, several studies have found that 6-month-old Western children, who have far less exposure to music than adults, are able to discriminate rhythmic disruptions in rhythms containing complex metres such as 5-beat metres (complex metres being far less common in Western musical culture than simple metres such as 2-beat or 3-beat metres), whereas 12-month-old Western children only discriminate rhythms having simple metres (Hannon and Trehub 2005). This suggests that culture-specific representations begin to emerge and affect behaviour between 6 and 12 months (Hannon and Trehub 2005).

7 Musical Rhythm as a Physical Input to the Human Auditory System

Musical sounds are acoustic stimuli that contain multiple temporal dimensions. They can be summarized in at least two components, the fine structure and the envelope, which are usual terms to describe waveforms in physics. In acoustics, the fine structure is determined by the fast air pressure variations, or air vibrations, reaching the ear. The processing of fine structure is involved in pitch perception, which can be defined as the perceptual phenomenon of sounds organized within a scale from low to high tones (Schnupp et al. 2010). The fine structure is itself modulated in amplitude, and the dynamics of this amplitude modulation constitute the sound envelope. In humans, amplitude modulations produce various hearing sensations depending on the modulation frequency. Rhythms, as well as most amplitude modulation frequencies found in ordinary speech, correspond to envelope frequencies up to 20 Hz (i.e. up to 20 amplitude modulations per second), whereas pitch corresponds to amplitude modulation frequencies above 20 Hz. How the human auditory system converts these complex acoustic inputs into a perceptual, subjective representation of music remains a challenge for neuroscientists.
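The distinction between fine structure and envelope can be illustrated with a minimal amplitude-modulated tone. The sketch below is hypothetical, not from the chapter: a 440 Hz carrier (fine structure, heard as pitch) whose amplitude is modulated at 2 Hz (envelope, heard as a rhythmic pulsation).

```python
import math

def am_tone(carrier_hz=440.0, envelope_hz=2.0, dur_s=1.0, sr=8000):
    """Return (signal, envelope): a sine carrier (fine structure) whose
    amplitude is modulated by a slow envelope varying between 0 and 1."""
    n = int(dur_s * sr)
    envelope = [0.5 * (1.0 + math.sin(2.0 * math.pi * envelope_hz * t / sr))
                for t in range(n)]
    signal = [e * math.sin(2.0 * math.pi * carrier_hz * t / sr)
              for t, e in enumerate(envelope)]
    return signal, envelope

signal, envelope = am_tone()
```

Here the carrier frequency (440 Hz) lies well above the 20 Hz boundary and would be perceived as pitch, while the 2 Hz envelope lies within the rhythm range of envelope frequencies.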

The human auditory system includes several anatomical relays that constitute the ascending auditory pathway. This pathway is described as ascending because it is responsible for transmitting sound information from the ear to the cortex. The cerebral cortex is the 2–4 mm thick outer layer, composed of billions of cells (i.e. neurons), at the surface of the brain of humans and other mammals. The cortex plays a key role in memory, attention, perceptual awareness, thought, language and other cognitive or motor coordination abilities.

The first internal representation of the sound is built within the cochlea, as the first relay of the ascending auditory pathway located within the ear. At the level of the cochlea, auditory cells respond to the sound envelope, as well as to the fine structure of sounds, by producing an electric signal in response to the sound waveforms. Along the ascending auditory pathway, the sound envelope information is transmitted from the cochlea to the cortex, through a similar principle of input-output transformation.

However, the various relays constituting the ascending auditory pathway do not merely respond to sounds by faithfully reproducing the sound waveform. There is increasing evidence that they act as complex processors, analyzing and transforming the sound waveform into information relevant to behaviour. Taken in the context of research on the perception of musical rhythms, the key question is how the human brain processes the air vibrations constituting musical rhythms to support our subjective feeling of beat and metre.

8 How to Explore Brain Responses to Rhythm in Neuroscience?

Research on the brain mechanisms of rhythm perception and production is in line with research in systems neuroscience, a subfield of neuroscience that studies the function of neural populations and networks in order to understand how high-level mental functions, such as perception or sensorimotor synchronization, emerge from the interactions within these networks. To this aim, systems neuroscientists typically employ techniques measuring neural function in vivo, such as electrophysiology or functional neuroimaging scanners. Such functional neuroimaging techniques measure an aspect of brain function, often with a view to understanding the relationship between the activity generated by certain brain areas and mental functions.

For instance, functional magnetic resonance imaging (fMRI) is a functional neuroimaging procedure that measures brain activity by detecting associated changes in cerebral blood flow. The technique relies on the fact that cerebral blood flow and neural activation are coupled: when an area of the brain is in relative use, blood flow to that region increases. Functional MRI can localize activity at the millimetre spatial scale but, using standard techniques, has a limited temporal resolution of a few seconds. Several studies have investigated the brain responses to musical rhythms using fMRI. They found that rhythm perception recruits motor-related areas, even in the absence of overt movement, showing activity in brain areas such as the premotor cortex, cerebellum, supplementary motor area and basal ganglia (Schubotz et al. 2000; Grahn and Brett 2007; Chen et al. 2008).

Another way to sample brain activity is electroencephalography (EEG) (Fig. 1 in the next section). EEG records electric current fluctuations simultaneously generated by large groups of neurons, via multiple sensors (electrodes) placed on the head of the participant. The analysis of the EEG signal can then focus on the neural responses to a single external input, such as a sound or an image displayed on a computer screen. This is achieved by analyzing, for instance, the time course of these neural responses, or by analyzing the spectral content of the EEG, that is, the type of neural oscillations that can be observed within the signal. In contrast with fMRI, the EEG signal recorded on the head lacks spatial resolution. Indeed, the potentials measured at a given scalp position are not systematically determined by the activity of the portion of cortex located immediately underneath the sensor (Nunez and Srinivasan 2006). However, it offers the advantage of measuring neural activity at the millisecond time scale, revealing how the brain responds to rhythm over time.
Fig. 1

EEG responses to rhythmic patterns

Several studies have previously explored internal representations of beat and metre using EEG. For example, one approach has consisted of recording the brain responses elicited by a deviant sound inserted within a sequence of regular sounds played to induce the beat in the participant’s mind. However, this approach only captures indirect evidence of internal entrainment to the beat, extrapolated from the brain responses to violations of the expected beat structure (Winkler et al. 2009). To capture the internal representation of the beat without artificially disrupting it, another approach has consisted of recording movements paced to the perceived beat (Repp 2005). However, this sensorimotor synchronization approach does not allow disentangling the constraints related to perception from those related to movement.

To try to overcome these limitations, we developed an approach built on the hypothesis that humans perceive the beat of music by synchronizing a large number of neurons at the frequency of the beat, not only within the auditory system but also in motor areas of the brain (Nozaradan 2014; Chemin et al. 2014). These neural activities, pulsing in sync with the beat perceived when listening to music, are captured with the EEG and identified by analyzing the spectrum of the EEG signal (Nozaradan et al. 2011, 2012a, b, 2015, 2016). This also explains why the approach is referred to as frequency-tagging (Regan 1989; Nozaradan 2014).
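In a frequency-tagging analysis, the response of interest appears as a narrow peak in the EEG amplitude spectrum at the tagged (here, beat) frequency, and a common way to quantify such a peak is to subtract the mean amplitude of neighbouring frequency bins, taken as an estimate of residual background noise. The following is a minimal sketch of that idea, assuming an amplitude spectrum already computed as a list of bin amplitudes; the function name and parameter values are illustrative, not from the chapter:

```python
def peak_above_noise(amplitudes, target_bin, skip=1, width=4):
    """Amplitude at the target frequency bin minus the mean amplitude of
    surrounding bins (skipping the bins immediately adjacent to the peak)."""
    left = amplitudes[max(0, target_bin - skip - width):target_bin - skip]
    right = amplitudes[target_bin + skip + 1:target_bin + skip + 1 + width]
    noise = sum(left + right) / len(left + right)
    return amplitudes[target_bin] - noise

# Flat 1.0 uV background with a 3.0 uV peak at bin 10:
spectrum = [1.0] * 21
spectrum[10] = 3.0
print(peak_above_noise(spectrum, 10))  # 2.0
```

A positive value indicates a response standing out of the noise floor at the tagged frequency; a value near zero indicates no measurable entrainment at that frequency.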

9 EEG Frequency-Tagging to Explore the Entrainment to Musical Rhythms

In a recent experiment, we recorded with EEG the brain activity elicited, without any movement, in response to a perceived and imagined beat (Nozaradan et al. 2011). We asked eight healthy participants to listen to a regular metronomic sound sequence of less than a minute, and to voluntarily imagine a metre on these metronomic sounds, as either binary or ternary. In other words, the participants were asked to imagine the sounds as grouped by two or by three, as in a march or a waltz respectively. After collecting a few repetitions of this procedure, we could observe in the EEG signal that this mental imagery of metre, voluntarily imposed onto the sounds without moving, was related to the emergence of neural activities at frequencies corresponding exactly to the perceived and imagined beat and metre. This robust synchronization, or entrainment, of brain activity at beat and metre frequencies could constitute the actual support of how the brain builds a mental representation of beat and metre.

Another EEG experiment was conducted in which participants listened to rhythmic patterns. These rhythms are known to induce the perception of beat and metre, and are commonly found in Western compositions. As represented in Fig. 1 (adapted from Nozaradan et al. 2012b), these rhythmic patterns consisted of sequences of short tones (here represented by crosses) and silences (represented by dots). The vertical arrows indicate the places in the patterns where the beat was perceived by the participants (i.e. the periodic time points on which they would clap their hands or tap their feet to the rhythm). Note that the pattern presented on the right in Fig. 1 can be considered syncopated, as some beats occur on silences rather than sounds. The pattern presented on the left, in contrast, is considered unsyncopated, as all perceived beats coincide with sounds rather than silences. The second line of Fig. 1 represents the frequency spectrum of the sound envelope. The beat frequency is indicated by a vertical arrow for each rhythm. Note that in the pattern shown on the right, the beat frequency (at 1.25 Hz) is not as prominent as in the pattern presented on the left. The third line in Fig. 1 represents the frequency spectrum of the scalp-surface EEG as recorded while the participants listened to these repeated patterns (group-level average from nine participants, averaged across the 64 EEG sensors, as in the picture on the upper right). Finally, the bottom line in Fig. 1 corresponds to the frequency spectrum of the intracerebral EEG as recorded directly within the human auditory cortex (group-level average from eight patients implanted with intracranial depth electrodes for the treatment of intractable epilepsy, averaged across the contacts of an electrode implanted within Heschl’s gyrus, the human primary auditory cortex, as in the picture on the bottom right).
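The envelope spectrum of such a binary rhythmic pattern (second line of Fig. 1) can be approximated with a plain discrete Fourier transform. The sketch below is illustrative, with a hypothetical 200 ms grid chosen so that a tone every 800 ms yields a beat frequency of 1.25 Hz:

```python
import cmath

def envelope_spectrum(pattern, grid_s=0.2):
    """DFT magnitudes of a binary rhythm (1 = tone onset, 0 = silence),
    keyed by frequency in Hz (bin k corresponds to k / duration)."""
    n = len(pattern)
    duration_s = n * grid_s
    return {round(k / duration_s, 3):
            abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(pattern)))
            for k in range(1, n // 2 + 1)}

# 2.4 s unsyncopated pattern: a tone on every beat (every 0.8 s -> 1.25 Hz)
unsyncopated = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
spectrum = envelope_spectrum(unsyncopated)
```

For this purely periodic pattern, all the envelope energy falls at 1.25 Hz and its harmonics (2.5 Hz, …), while the other bins are empty; a syncopated pattern would spread energy over frequencies unrelated to the beat, making the beat-frequency peak less prominent, as in the right-hand rhythm of Fig. 1.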

Importantly, we observed that brain activity was selectively enhanced at the beat frequency (pointed to by the arrows in Fig. 1). That is, among the multiple peaks of neural activity elicited in response to these complex rhythms, the peaks at frequencies corresponding to the perceived beat were selectively amplified in the EEG signal, compared to the neural activity elicited at frequencies contained in the rhythmic patterns but unrelated to the beat. This selective enhancement occurred even when the beat frequency was not prominent in the rhythm spectrum, as in the pattern on the right of Fig. 1. Moreover, this relative enhancement of the neural activity at perceived beat and metre frequencies was disrupted when playing the rhythmic patterns four times faster, so as to play the same rhythm at a tempo much faster than the common range of tempi in music. This suggests that the brain actually transforms the rhythmic input by amplifying some frequencies that are relevant for perception and behaviour. Moreover, this study illustrates how this methodology allows measuring objectively the transformation between the rhythmic input and the neural response.

In addition, we investigated one of the most fascinating aspects of musical rhythm: its strong relationship with movement. On the one hand, music spontaneously entrains humans to move (Janata et al. 2012; Phillips-Silver et al. 2010). On the other hand, movement influences the perception of musical rhythms, already in infants (Phillips-Silver and Trainor 2005, 2007). The EEG frequency-tagging approach can help in understanding how neural representations of rhythm are shaped by movement in humans. The EEG was recorded while healthy participants listened to a rhythm, before and after a body-movement training session (Chemin et al. 2014). This movement training, lasting about a dozen minutes, consisted of moving the body (i.e. clapping the hands, bouncing the head, moving the torso, etc.), jointly with the investigator, according to a given metric interpretation of the rhythm. We found that the brain responses to the rhythm, as recorded with EEG after body movement, were significantly enhanced at frequencies related to the metre to which the participants had moved, even though they did not move or focus attention on the metric structure during the EEG recording. These results provide evidence that body movement can selectively shape the subsequent neural representation of auditory rhythms. In other words, moving the body can directly shape how our brain processes a rhythm, revealing the flexibility of our own mental representation of a rhythm.

10 Discussion: Bridging the Neuroscience of Musical Rhythm with Models of Empirical Aesthetics

We have briefly seen how the brain responses to musical rhythm can be sampled and related to our mental representation of musical rhythm. We measured these responses in the form of peaks of EEG activity elicited at the exact frequency of the perceived beat when listening to a musical rhythm. Based on the results of these experiments, we propose an inverted U-curve model, as schematized in Fig. 2. In this model, the x-axis represents the metrical complexity of the musical rhythm from which a beat has to be extracted by the participant, and the y-axis represents the brain response to the beat. The latter is measured by EEG frequency-tagging as the relative amplitude of the neural activity at beat and metre frequencies (in μV, microvolts, the unit of amplitude of brain activity measured with EEG).
Fig. 2

The inverted U-curve proposed to relate the complexity of musical rhythm to the neural entrainment to the beat

According to this view, the rhythm standing at the left end of the curve, i.e. the most basic rhythm imaginable in terms of beat extraction, could correspond for instance to a metronomic sound (as in Nozaradan et al. 2011). This case is considered the most basic because no periodic beat has to be extracted by the participant’s mind, since the periodicity is given by the periodic sound itself. The top of the inverted U-curve corresponds to rhythms that are not metronomic but from which the listener can extract a beat quite easily despite the complexity of the rhythmic structure. This case could correspond to rhythms such as those represented in Fig. 1 (left part of the figure; see also Nozaradan et al. 2012b). Here, the neural entrainment at the beat frequency would be higher in amplitude than the neural activity measured in response to a metronomic sound. In other words, these rhythms, which are more challenging for the listener’s mind due to their relatively complex structure, are thought to recruit a substantial amount of brain processing to grasp and maintain a periodic temporal structure of the beat. However, there is a compromise between the ease of extracting the beat on the one hand and the degree of complexity on the other. Indeed, the rhythm standing at the right end of the curve is, in contrast, too complex to induce a spontaneous perception of beat. In this case, it is still possible to measure peaks of brain activity in response to the sound envelope, but these peaks are relatively lower in amplitude and are not selectively amplified compared to the peaks of neural activity elicited by the sounds that are not relevant for beat perception (Nozaradan et al. 2012b). This is the case for a random rhythm, or for a rhythm played too fast or too slow, i.e. at a tempo lying outside the specific tempo range for beat perception (Nozaradan et al. 2012b).
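As a purely illustrative toy (none of the numbers below come from the chapter or its data), the qualitative shape of such an inverted U-curve can be written as a function peaking at intermediate complexity:

```python
import math

def inverted_u(complexity, optimum=0.5, width=0.25):
    """Toy inverted U-curve: the modelled beat response (arbitrary units)
    peaks at intermediate rhythmic complexity and falls off on both sides.
    All parameters are hypothetical, chosen only to sketch the shape."""
    return math.exp(-((complexity - optimum) / width) ** 2)

# A metronome (very low complexity) and a random rhythm (very high
# complexity) both yield weaker modelled responses than the mid-range.
print(inverted_u(0.05) < inverted_u(0.5))  # True
print(inverted_u(0.95) < inverted_u(0.5))  # True
```

Fitting an actual curve of this kind would of course require an empirical measure of rhythmic complexity on the x-axis, which, as discussed below, remains an open question.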

This inverted U-curve model remains speculative and has to be tested more systematically. Also, how complexity is measured remains critical to moving this model forward. Interestingly, this question has already been raised by researchers who have related aesthetic response to complexity using a similar inverted U-curve (McDermott 2012). According to this view, stimuli that are too simple or too complex are not aesthetically pleasing, but somewhere in the middle lies an optimum (Berlyne 1971). In fact, an inverted U-curve model relating complexity and aesthetic pleasantness is at least partly consistent with the intuition that something that is too repetitive is boring, while something that is completely random is impossible to grasp in order to build a mental representation, and thus cannot induce aesthetic pleasure (McDermott 2012).

Taken in the context of musical rhythm, the studies summarized here show how the neural entrainment to musical rhythms provides a clear example of biological rules shaping music perception, by constraining the brain responses to the beat within a given range of rhythmic complexity. However, these limits are not fixed. For instance, we have shown that movement training on a rhythm can selectively enhance the brain responses to that rhythm (Chemin et al. 2014). Aside from this short-term flexibility, musical practice across the life span could also induce long-term changes in the brain responses to musical rhythm. Our EEG frequency-tagging approach could help clarify this issue by comparing the brain responses to musical rhythm in musicians vs. non-musicians. Moreover, to explore the biological foundations of beat and metre properly, it is important to be aware of the diversity encountered across cultures in rhythmic material and metrical forms. As briefly seen in the previous sections, rhythm has not developed in the same way across musical cultures (Pressing 2002). The EEG frequency-tagging approach could also help address some of the questions pertaining to cross-cultural differences in beat induction. Finally, because it does not require participants to produce concomitant overt movement, this approach could be used to sample the brain responses to musical rhythms with comparable experimental designs in healthy adults and in infants, or even in animals. This would allow us to observe possible shifts of the inverted U-curve towards lower rhythmic complexity depending on factors such as the age of the tested participants or even the species (Fig. 2).
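The logic of the frequency-tagging approach can be sketched with a toy computation: measuring the amplitude of a signal at the beat frequency via a single-bin discrete Fourier transform. The 2.4 Hz beat frequency echoes stimulation rates used in Nozaradan et al. (2011), but the "EEG" below is a synthetic two-component signal, not real data, and the analysis is a minimal sketch rather than the full pipeline used in those studies.

```python
import math

def amplitude_at(signal, fs, freq):
    """Amplitude of the component at `freq` Hz (single-bin DFT),
    as used in frequency-tagging analyses of steady-state responses."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

# Synthetic "EEG": a strong 2.4 Hz beat-frequency component plus a
# weaker 5 Hz component, sampled at 100 Hz for 10 s.
fs, dur = 100, 10
eeg = [1.0 * math.sin(2 * math.pi * 2.4 * i / fs)
       + 0.3 * math.sin(2 * math.pi * 5.0 * i / fs)
       for i in range(fs * dur)]

beat_amp = amplitude_at(eeg, fs, 2.4)   # close to 1.0
other_amp = amplitude_at(eeg, fs, 5.0)  # close to 0.3
```

Comparing such amplitudes at the beat frequency versus the other frequencies present in the sound envelope is, in essence, how a selective enhancement of beat-related responses is quantified; the same computation can be run identically on recordings from adults, infants or animals, since it requires no overt movement from the participant.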

Hence, aside from the biological constraints, at least four aspects could be responsible for shaping the brain responses to rhythms (and to music in general): familiarity, culture, training and context. Many people have experienced finding a piece of music relatively unrewarding on first hearing, but coming to love it after repeated listening. Moreover, people tend to prefer the music of their own culture, and often find the music of foreign cultures uninteresting by comparison (McDermott 2012). There is also evidence that expertise generally reduces the influence of complexity on preferences (McDermott 2012). Finally, the listening context can dramatically shape our preference for a piece of music. Social contexts, for instance, loom large, as people use music to project an identity (North et al. 2000). However, enjoyment of music is also determined by what we are experiencing at the moment of listening.

Taken together, the literature reviewed here illustrates how music constitutes a rich framework for exploring the phenomenon of entrainment at the level of neural networks and its involvement in dynamic sensorimotor and cognitive processing. It also suggests that the EEG frequency-tagging approach is well suited to assessing the neural entrainment to musical rhythm in various contexts. More generally, these studies illustrate how exploring musical rhythm perception offers a unique opportunity to gain insight into the general mechanisms of entrainment at different scales, from neural systems to entire bodies, and into the vivid interaction between biological and cultural constraints.

Acknowledgements

The author is supported by the Australian Research Council (DE160101064).

References

  1. Arom, S. (2000). Prolegomena to a biomusicology. In N. L. Wallin, B. Merker, & S. Brown (Eds.), The origins of music (pp. 27–29). Cambridge, MA: MIT Press.
  2. Berlyne, D. E. (1971). Aesthetics and psychobiology. New York: Appleton-Century-Crofts.
  3. Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences, 98(20), 11818–11823.
  4. Bruner, G. C. (1990). Music, mood, and marketing. Journal of Marketing, 54(4), 94–104.
  5. Chemin, B., Mouraux, A., & Nozaradan, S. (2014). Body movement shapes selectively the neural representation of musical rhythms. Psychological Science, 25(12), 2147–2159.
  6. Chen, J. L., Penhune, V. B., & Zatorre, R. J. (2008). Listening to musical rhythms recruits motor regions of the brain. Cerebral Cortex, 18, 2844–2854.
  7. Fitch, W. T. (2006). The biology and evolution of music: A comparative perspective. Cognition, 100, 173–215.
  8. Grahn, J. A. (2012). Neural mechanisms of rhythm perception: Current findings and future perspectives. Topics in Cognitive Science, 4(4), 585–606.
  9. Grahn, J. A., & Brett, M. (2007). Rhythm and beat perception in motor areas of the brain. Journal of Cognitive Neuroscience, 19, 893–906.
  10. Hagen, E. H., & Bryant, G. A. (2003). Music and dance as a coalition signaling system. Human Nature, 14, 21–51.
  11. Hannon, E. E., & Johnson, S. P. (2005). Infants use meter to categorize rhythms and melodies: Implications for musical structure learning. Cognitive Psychology, 50(4), 354–377.
  12. Hannon, E. E., & Trehub, S. E. (2005). Metrical categories in infancy and adulthood. Psychological Science, 16(1), 48–55.
  13. Hove, M. J., & Risen, J. L. (2009). It’s all in the timing: Interpersonal synchrony increases affiliation. Social Cognition, 27(6), 949–961.
  14. Iyer, V. (2002). Embodied mind, situated cognition, and expressive microtiming in African-American music. Music Perception, 19(3), 387–414.
  15. Janata, P., Tomic, S. T., & Haberman, J. M. (2012). Sensorimotor coupling in music and the psychology of the groove. Journal of Experimental Psychology: General, 141(1), 54–75.
  16. Kirschner, S., & Tomasello, M. (2009). Joint drumming: Social context facilitates synchronization in preschool children. Journal of Experimental Child Psychology, 102(3), 299–314.
  17. Langer, S. (1942). Philosophy in a new key. Cambridge, MA: Harvard University Press.
  18. Leman, M. (2008). Embodied music and mediation technology. Cambridge, MA: MIT Press.
  19. London, J. (2004). Hearing in time: Psychological aspects of musical meter. London: Oxford University Press.
  20. McDermott, J. H. (2012). Auditory preferences and aesthetics: Music, voices, and everyday sounds. In R. Dolan & T. Sharot (Eds.), Neuroscience of preference and choice (pp. 227–256). San Diego: Academic Press.
  21. Merchant, H., & Honing, H. (2014). Are non-human primates capable of rhythmic entrainment? Evidence for the gradual audiomotor evolution hypothesis. Frontiers in Neuroscience, 7, 274.
  22. Nelson, K. (1985). The art of reciting the Qur’an. Austin: University of Texas Press.
  23. North, A. C., Hargreaves, D. J., & O’Neill, S. A. (2000). The importance of music to adolescents. British Journal of Educational Psychology, 70, 255–272.
  24. North, A. C., Shilcock, A., & Hargreaves, D. J. (2003). The effect of musical style on restaurant customers’ spending. Environment and Behavior, 35, 712–718.
  25. Nozaradan, S. (2014). Exploring how musical rhythm entrains brain activity with electroencephalogram frequency-tagging. Philosophical Transactions of the Royal Society B, 369(1658), 20130393.
  26. Nozaradan, S., Peretz, I., & Keller, P. E. (2016). Individual differences in rhythmic cortical entrainment correlate with predictive behavior in sensorimotor synchronization. Scientific Reports, 6, 20612. doi: 10.1038/srep20612.
  27. Nozaradan, S., Peretz, I., Missal, M., & Mouraux, A. (2011). Tagging the neuronal entrainment to beat and meter. The Journal of Neuroscience, 31, 10234–10240.
  28. Nozaradan, S., Peretz, I., & Mouraux, A. (2012a). Selective neuronal entrainment to beat and meter embedded in a musical rhythm. The Journal of Neuroscience, 32, 17572–17581.
  29. Nozaradan, S., Peretz, I., & Mouraux, A. (2012b). Steady-state evoked potentials as an index of multisensory temporal binding. NeuroImage, 60, 21–28.
  30. Nozaradan, S., Zerouali, Y., Peretz, I., & Mouraux, A. (2015). Capturing with EEG the neuronal entrainment and coupling underlying sensorimotor integration while moving to the beat. Cerebral Cortex, 25(3), 736–747.
  31. Nunez, P. L., & Srinivasan, R. (2006). Electric fields of the brain: The neurophysics of EEG (2nd ed.). New York: Oxford University Press.
  32. Patel, A. D. (2006). Musical rhythm, linguistic rhythm, and human evolution. Music Perception, 24, 99–104.
  33. Patel, A. D. (2008). Music, language, and the brain. New York: Oxford University Press.
  34. Patel, A. D., Iversen, J. R., Bregman, M. R., & Schulz, I. (2009). Experimental evidence for synchronization to a musical beat in a nonhuman animal. Current Biology, 19, 827–830.
  35. Peretz, I., & Zatorre, R. J. (Eds.). (2003). The cognitive neuroscience of music. New York: Oxford University Press.
  36. Phillips-Silver, J., Aktipis, C. A., & Bryant, G. A. (2010). The ecology of entrainment: Foundations of coordinated rhythmic movement. Music Perception, 28, 3–14.
  37. Phillips-Silver, J., & Keller, P. E. (2012). Searching for roots of entrainment and joint action in early musical interactions. Frontiers in Human Neuroscience, 6, 26.
  38. Phillips-Silver, J., & Trainor, L. J. (2005). Feeling the beat: Movement influences infant rhythm perception. Science, 308(5727), 1430–1430.
  39. Phillips-Silver, J., & Trainor, L. J. (2007). Hearing what the body feels: Auditory encoding of rhythmic movement. Cognition, 105, 533–546.
  40. Pinker, S. (1997). How the mind works. New York: Norton.
  41. Pressing, J. (2002). Black Atlantic rhythm: Its computational and transcultural foundations. Music Perception, 19(3), 285–310.
  42. Regan, D. (1989). Human brain electrophysiology: Evoked potentials and evoked magnetic fields in science and medicine. New York: Elsevier.
  43. Repp, B. H. (2005). Sensorimotor synchronization: A review of the tapping literature. Psychonomic Bulletin & Review, 12, 969–992.
  44. Sacks, O. (2008). Musicophilia: Tales of music and the brain. New York: Vintage Books.
  45. Salimpoor, V. N., Benovoy, M., Larcher, K., Dagher, A., & Zatorre, R. J. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience, 14(2), 257–262.
  46. Schnupp, J., Nelken, I., & King, A. (2010). Auditory neuroscience: Making sense of sound. Cambridge, MA: The MIT Press.
  47. Schubotz, R. I., Friederici, A. D., & von Cramon, D. Y. (2000). Time perception and motor timing: A common cortical and subcortical basis revealed by fMRI. NeuroImage, 11(1), 1–12.
  48. Todd, N. P., Lee, C. S., & O’Boyle, D. J. (2002). A sensorimotor theory of temporal tracking and beat induction. Psychological Research, 66(1), 26–39.
  49. Van Noorden, L., & Moelants, D. (1999). Resonance in the perception of musical pulse. Journal of New Music Research, 28, 43–66.
  50. Wallin, N. L., Merker, B., & Brown, S. (Eds.). (2000). The origins of music. Cambridge, MA: MIT Press.
  51. Winkler, I., Háden, G. P., Ladinig, O., Sziller, I., & Honing, H. (2009). Newborn infants detect the beat in music. Proceedings of the National Academy of Sciences, 106(7), 2468–2471.
  52. Witek, M. (2012). Groove experience: Emotional and physiological responses to groove-based music. Proceedings of the 7th Triennial Conference of European Society for the Cognitive Sciences of Music (ESCOM 2009), Jyväskylä, Finland.
  53. Zatorre, R. J., Chen, J. L., & Penhune, V. B. (2007). When the brain plays music: Auditory-motor interactions in music perception and production. Nature Reviews Neuroscience, 8(7), 547–558.
  54. Zentner, M., & Eerola, T. (2010). Rhythmic engagement with music in infancy. Proceedings of the National Academy of Sciences, 107(13), 5768–5773.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. MARCS Institute, Western Sydney University (WSU), Penrith, Australia
  2. Institute of Neuroscience (IoNS), Université catholique de Louvain (UCL), Brussels, Belgium
  3. International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada