The Pursuit of Happiness in Music: Retrieving Valence with Contextual Music Descriptors
In the study of music and emotion, Valence usually refers to one of the dimensions of the circumplex model of emotions: it describes the appraisal of happiness in music, on a scale ranging from sad to happy. Nevertheless, the related literature shows that Valence is particularly difficult for a computational model to predict. Since Valence is a contextual music feature, we assume here that its prediction also requires contextual music descriptors in the predictive model. This work describes the use of eight contextual (also known as higher-level) descriptors, previously developed by us, to estimate happiness in music. Each descriptor was tested independently by computing the correlation coefficient between its prediction and the mean Valence rating, collected from thirty-five listeners, over a piece of music. Next, a linear model combining the eight descriptors was built, and its prediction for the same piece is described and compared with two other computational models from the literature designed for the dynamic prediction of music emotion. Finally, we propose an initial investigation of the effects of expressive performance and musical structure on the prediction of Valence. To that end, the descriptors are separated into two groups, performance and structural, and a linear model is built from each group. The Valence predictions given by these two models over two further pieces of music are compared with the corresponding listeners' mean Valence ratings, and the achieved results are presented, described, and discussed.
Keywords: music information retrieval, music cognition, music emotion
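The evaluation pipeline summarized above (per-descriptor correlation with the mean listener rating, a linear model over all eight descriptors, and separate performance/structural models) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic data, the variable names (`descriptors`, `valence`), and the 4/4 column split into performance and structural groups are all assumptions for demonstration only.

```python
# Sketch of the abstract's evaluation steps on synthetic data:
# (1) correlate each contextual descriptor with the listeners' mean
#     Valence rating, (2) fit a linear model on all eight descriptors,
# (3) fit separate performance and structural models.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_frames = 200                                  # analysis frames over the piece
descriptors = rng.normal(size=(n_frames, 8))    # eight contextual descriptors (synthetic)
valence = rng.normal(size=n_frames)             # mean Valence rating of 35 listeners (synthetic)

# (1) correlation of each descriptor's prediction with the mean rating
for i in range(descriptors.shape[1]):
    r, p = pearsonr(descriptors[:, i], valence)
    print(f"descriptor {i + 1}: r = {r:+.3f} (p = {p:.3g})")

# (2) linear model combining the eight descriptors
model = LinearRegression().fit(descriptors, valence)
r_model, _ = pearsonr(model.predict(descriptors), valence)
print(f"full linear model: r = {r_model:+.3f}")

# (3) performance vs. structural groups; the 4/4 split is an assumed
#     placeholder, the paper defines which descriptors fall in each group
groups = {"performance": descriptors[:, :4], "structural": descriptors[:, 4:]}
for name, X in groups.items():
    m = LinearRegression().fit(X, valence)
    r, _ = pearsonr(m.predict(X), valence)
    print(f"{name} model: r = {r:+.3f}")
```

In the paper the models are evaluated against listener ratings on held-out pieces of music; the in-sample correlations printed here are only meant to show the shape of the computation.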