The alignments and crosslinks obtained from audio synchronization (Chap. 5) and audio matching (Chap. 6) can be used to conveniently switch between different versions of a piece of music (interdocument navigation). We will now address the problem of audio structure analysis, which lays the basis for intradocument navigation. One major goal of the structural analysis of an audio recording is to automatically extract the repetitive structure or, more generally, the musical form of the underlying piece of music. Recent approaches such as [14,37,46,49,81,127,129,139,161] work well for music where the repetitions largely agree with respect to instrumentation and tempo, as is typically the case for popular music. For other classes of music, including Western classical music, musically similar audio segments may exhibit significant variations in parameters such as dynamics, timbre, execution of note groups, musical key, articulation, and tempo progression.

In this chapter, we propose robust and efficient algorithms for structure analysis that identify musically similar segments. To obtain a flexible and robust algorithm, the idea is to simultaneously account for possible variations at various stages and levels. At the feature level, we use coarse chroma-based audio features that absorb microvariations. To cope with local variations, we design an advanced cost measure by integrating contextual information (Sect. 7.2). Finally, we describe a new strategy for structure extraction that can cope with more global variations (Sects. 7.3 and 7.4). Our experimental results with classical and popular music show that our algorithm performs successfully even in the presence of significant musical variations (Sect. 7.5). In Sect. 7.1, we start by summarizing a general strategy for audio structure analysis and introduce some notation that is used throughout this chapter. Related work and future research directions are discussed in Sect. 7.6.
In this chapter, we closely follow [139]. The strategy of enhancing self-similarity matrices by introducing a contextual local cost measure was first described in [137].
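To make the pipeline sketched above concrete, the following toy example illustrates two of its ingredients: building a self-similarity matrix from chroma-like features and enhancing it with contextual information. This is a minimal sketch, not the chapter's actual algorithm: the function names (`self_similarity`, `contextual_smoothing`), the cosine similarity measure, and the diagonal-averaging enhancement are simplifying assumptions standing in for the contextual cost measure developed in Sect. 7.2.

```python
import numpy as np

def self_similarity(chroma):
    """Cosine self-similarity matrix for a (12, N) chroma feature matrix."""
    norms = np.linalg.norm(chroma, axis=0, keepdims=True)
    normed = chroma / np.maximum(norms, 1e-9)
    return normed.T @ normed  # entry (n, m) compares frames n and m

def contextual_smoothing(S, L=4):
    """Average S along its diagonals over L consecutive frames: a
    simplified stand-in for a contextual cost measure that suppresses
    isolated spurious similarities while preserving path structure."""
    N = S.shape[0]
    S_out = np.zeros_like(S)
    for n in range(N):
        for m in range(N):
            vals = [S[n + l, m + l] for l in range(L)
                    if n + l < N and m + l < N]
            S_out[n, m] = np.mean(vals)
    return S_out

# Toy example: a chromagram whose first four frames repeat at the end,
# so the smoothed matrix shows a diagonal "path" linking both segments.
rng = np.random.default_rng(0)
pattern = rng.random((12, 4))
chroma = np.concatenate([pattern, rng.random((12, 4)), pattern], axis=1)
S = contextual_smoothing(self_similarity(chroma), L=2)
print(S.shape)  # (12, 12): one row and column per frame
```

Repetitions then appear as high-valued diagonal stripes in `S` (here, linking frames 0–3 with frames 8–11); extracting such paths is the task of the structure-extraction stage described in Sects. 7.3 and 7.4.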
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
Cite this chapter
(2007). Audio Structure Analysis. In: Information Retrieval for Music and Motion. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74048-3_7
Print ISBN: 978-3-540-74047-6
Online ISBN: 978-3-540-74048-3