Abstract
This chapter presents the process of building emotion maps of musical compositions. In our approach, emotion recognition was treated as a regression problem, and a two-dimensional valence-arousal model was used to measure emotions. Conducting the experiments required constructing regressors, selecting attributes, and analyzing the chosen musical compositions. We also examined the influence of different feature sets (low-level, rhythm, tonal, and their combination) on arousal and valence prediction; combining the different types of features significantly improves results compared with using any single group. We identified features particularly suited to detecting arousal and valence separately, as well as features useful in both cases. The resulting emotion maps provide new knowledge about the distribution of emotions in an examined audio recording, insight that until now was available only to music experts. Finally, we propose features for analyzing and comparing changes in arousal and valence over time and use them to compare selected well-known sonatas by Ludwig van Beethoven with several of the most famous songs by The Beatles.
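The regression setup described above can be sketched as follows. This is a minimal illustration, not the chapter's actual pipeline: the feature arrays below are random stand-ins for the low-level, rhythm, and tonal descriptors extracted from audio, the annotations are synthetic, and the choice of SVR as the regressor is an assumption for the example.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-segment feature groups (random stand-ins for the
# low-level, rhythm, and tonal descriptors the chapter refers to).
n_segments = 200
low_level = rng.normal(size=(n_segments, 10))
rhythm = rng.normal(size=(n_segments, 5))
tonal = rng.normal(size=(n_segments, 8))

# Synthetic arousal/valence annotations in [-1, 1].
arousal = np.tanh(low_level[:, 0] + rhythm[:, 0]
                  + rng.normal(scale=0.1, size=n_segments))
valence = np.tanh(tonal[:, 0] - low_level[:, 1]
                  + rng.normal(scale=0.1, size=n_segments))

# Combine all feature groups; the chapter reports that such a
# combination outperforms any single group on its own.
X = np.hstack([low_level, rhythm, tonal])

# Train one regressor per dimension, as in the valence-arousal model.
results = {}
for name, y in [("arousal", arousal), ("valence", valence)]:
    results[name] = cross_val_score(SVR(), X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {results[name]:.2f}")
```

Predicting arousal and valence for consecutive segments of a recording, rather than for whole pieces, is what yields a time-indexed emotion map of the composition.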
Copyright information
© 2018 Springer International Publishing AG
Cite this chapter
Grekow, J. (2018). Music Emotion Maps in the Arousal-Valence Space. In: From Content-based Music Emotion Recognition to Emotion Maps of Musical Pieces. Studies in Computational Intelligence, vol 747. Springer, Cham. https://doi.org/10.1007/978-3-319-70609-2_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-70608-5
Online ISBN: 978-3-319-70609-2
eBook Packages: Engineering (R0)