Latent Dirichlet Allocation Model for Recognizing Emotion from Music

  • S. Arulheethayadharthani
  • Rajeswari Sridhar
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 174)

Abstract

Emotion recognition has become a multi-disciplinary research area of great interest. Recognizing the emotion of audio data is useful for applications such as content-based searching and mood detection. The goal of this paper is to describe a system that automatically recognizes the emotion of a piece of music. We apply Latent Dirichlet Allocation (LDA), a technique originally used for document classification, to the task of identifying emotion from music. The recognition process consists of three steps: first, ten distinct features are extracted from the music; second, the values of these features are clustered; and third, an LDA model is constructed for each emotion. Once the LDA models are built, the emotion of a given piece of music is identified by evaluating it against them. The system was tested on South Indian film music to recognize six emotions (happy, sad, angry, love, disgust, and fear) and achieved an average accuracy of 80%.
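Since the abstract only outlines the pipeline, the following is a minimal sketch of how its three steps could fit together, assuming librosa for feature extraction and scikit-learn for clustering and LDA. The specific features, the codebook size N_WORDS, the topic count, and the helper names are illustrative placeholders, not the authors' published settings.

```python
# Sketch of the three-step pipeline from the abstract. Feature choices,
# cluster counts, and LDA parameters are assumptions for illustration;
# the paper's exact ten features are not listed in the abstract.
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

EMOTIONS = ["happy", "sad", "angry", "love", "disgust", "fear"]
N_WORDS = 64  # assumed codebook size for quantizing feature frames


def extract_features(path):
    """Step 1: frame-level acoustic features (stand-ins giving ten
    dimensions, since the abstract does not name the actual ten)."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=8)          # (8, frames)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # (1, frames)
    zcr = librosa.feature.zero_crossing_rate(y)                # (1, frames)
    return np.vstack([mfcc, centroid, zcr]).T                  # (frames, 10)


def build_codebook(all_frames, n_words=N_WORDS):
    """Step 2: cluster feature vectors so every frame maps to a discrete
    'audio word', turning each song into a bag-of-words document."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_frames)


def to_bow(frames, codebook):
    """Count how often each cluster (word) occurs in one song."""
    words = codebook.predict(frames)
    return np.bincount(words, minlength=codebook.n_clusters)


def train_models(songs_by_emotion, codebook):
    """Step 3: fit one LDA model per emotion on that emotion's songs.
    songs_by_emotion maps each label in EMOTIONS to a list of frame arrays."""
    models = {}
    for emotion, song_frames in songs_by_emotion.items():
        counts = np.array([to_bow(f, codebook) for f in song_frames])
        models[emotion] = LatentDirichletAllocation(
            n_components=5, random_state=0).fit(counts)
    return models


def classify(frames, codebook, models):
    """Score an unseen song under each per-emotion model and return the
    emotion whose model gives the highest approximate log-likelihood."""
    bow = to_bow(frames, codebook).reshape(1, -1)
    return max(models, key=lambda e: models[e].score(bow))
```

The clustering step is what makes a text model applicable to audio: once frames are quantized into discrete words, each per-emotion LDA learns topic distributions characteristic of that emotion, and classification simply asks which model explains the unseen song best.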

Keywords

Emotion Recognition, Latent Dirichlet Allocation, Music Information Retrieval, Music Signal

Copyright information

© Springer India 2013

Authors and Affiliations

  1. Anna University, Chennai, India
