Emotion-Based Hindi Music Classification

  • Deepti Chaudhary
  • Niraj Pratap Singh
  • Sachin Singh
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1164)


Music emotion detection is a growing and challenging field of research, driven by the increasing volume of digital music clips available online. Emotion can be regarded as an internal state that moves a person toward positive or negative action. Music emotion recognition (MER) is an emerging research area spanning fields such as engineering, medicine, psychology, and musicology. In this work, music signals are divided into four categories: excited, calm, sad, and frustrated. Features are extracted with the MIR toolbox, and classification is performed with K-nearest neighbor (K-NN) and support vector machine (SVM) classifiers. The feature-wise accuracy of the two approaches is compared using the mean and standard deviation of the feature vectors. Results show that spectral features yield the highest accuracy among all features and that SVM outperforms K-NN: for spectral features, SVM reaches 72.5% accuracy and K-NN 71.8%. When all features are combined, SVM achieves 75% and K-NN 73.8%.


Music emotion recognition · Support vector machine · K-nearest neighbor · Human–computer interaction · MIR toolbox
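
The classification setup summarized in the abstract, four emotion classes and a comparison of SVM against K-NN on summary feature vectors, can be sketched as follows. This is a minimal illustration, not the authors' code: the paper extracts real spectral features with the (MATLAB) MIR toolbox, whereas here synthetic feature vectors stand in for the mean/standard-deviation summaries, and all dataset sizes and hyperparameters are assumptions.

```python
# Sketch of a four-class music emotion classification experiment,
# comparing SVM and K-NN as in the paper. Feature vectors are
# synthetic placeholders for the paper's mean/std summaries of
# spectral features extracted with the MIR toolbox (MATLAB).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
classes = ["excited", "calm", "sad", "frustrated"]

# 40 clips per class (assumed size), each summarized as a
# 20-dimensional vector (e.g., mean and std of 10 frame-level features).
X_parts, y = [], []
for label, centre in zip(classes, rng.normal(0, 3, size=(4, 20))):
    X_parts.append(centre + rng.normal(0, 1, size=(40, 20)))
    y += [label] * 40
X = np.vstack(X_parts)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Train both classifiers and compare held-out accuracy.
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

print(f"SVM accuracy:  {svm.score(X_te, y_te):.3f}")
print(f"K-NN accuracy: {knn.score(X_te, y_te):.3f}")
```

On well-separated synthetic clusters both classifiers score highly; the paper's reported gap (SVM 75% vs. K-NN 73.8% with all features) comes from real, noisier audio features.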


Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

Authors and Affiliations

  • Deepti Chaudhary¹ ²
  • Niraj Pratap Singh¹
  • Sachin Singh³
  1. Department of Electronics and Communication Engineering, National Institute of Technology Kurukshetra, Haryana, India
  2. Department of Electronics and Communication Engineering, UIET, Kurukshetra University, Kurukshetra, India
  3. Department of Electrical and Electronics Engineering, National Institute of Technology, Delhi, India
