
An Automatic Emotion Recognition System for Annotating Spotify’s Songs

  • Conference paper
On the Move to Meaningful Internet Systems: OTM 2019 Conferences (OTM 2019)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 11877)

Abstract

The recognition of emotions for annotating large music datasets is still an open challenge. The problem is that most existing solutions require the audio of the songs and user or expert intervention during certain phases of the recognition process. In this paper, we propose an automatic solution that overcomes these drawbacks. It consists of a heterogeneous set of machine learning models built from Spotify’s Web data services and mining tools. To improve the accuracy of the resulting annotations, each model is specialized in recognizing a class of emotions. These models have been validated using the AcousticBrainz database and exported for integration into a music emotion recognition system, which has been used to emotionally annotate the Spotify music database of more than 30 million songs.
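The abstract describes the architecture but gives no code. As a rough illustration of the per-class modelling idea, the sketch below trains one binary (one-vs-rest) classifier per emotion class on Spotify-style audio descriptors and exports the resulting models. The feature names, the four-class emotion set, the use of scikit-learn’s RandomForestClassifier, the synthetic training data and the joblib export are all assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch: one specialized binary classifier per emotion class,
# trained on audio descriptors of the kind exposed by Spotify's Web API.
# Feature names, emotion classes and the random data are placeholders.
import numpy as np
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

FEATURES = ["valence", "energy", "danceability", "tempo", "acousticness"]
EMOTION_CLASSES = ["happy", "tense", "sad", "relaxed"]  # assumed class set

rng = np.random.default_rng(0)
X = rng.random((1000, len(FEATURES)))             # placeholder feature matrix
y = rng.integers(0, len(EMOTION_CLASSES), 1000)   # placeholder emotion labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# One one-vs-rest model per emotion class, with a per-class F1 score kept
# aside as a simple validation measure.
models = {}
for idx, emotion in enumerate(EMOTION_CLASSES):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train == idx)
    pred = clf.predict(X_test)
    print(f"{emotion}: F1 = {f1_score(y_test == idx, pred):.3f}")
    models[emotion] = clf

# Export the trained models so they can be plugged into an annotation system.
joblib.dump(models, "emotion_models.joblib")
```

At annotation time, a song’s descriptor vector would be scored by every specialized model and tagged with the emotion classes whose classifiers respond positively, which is one plausible way to combine such a heterogeneous set of models.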



Acknowledgments

This work has been supported by the TIN2015-72241-EXP and TIN2017-84796-C2-2-R projects, granted by the Spanish Ministerio de Economía y Competitividad, and the DisCo-T21-17R and Affective-Lab-T25-17D projects, granted by the Aragonese Government.

Author information

Corresponding author

Correspondence to P. Álvarez.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

de Quirós, J.G., Baldassarri, S., Beltrán, J.R., Guiu, A., Álvarez, P. (2019). An Automatic Emotion Recognition System for Annotating Spotify’s Songs. In: Panetto, H., Debruyne, C., Hepp, M., Lewis, D., Ardagna, C., Meersman, R. (eds.) On the Move to Meaningful Internet Systems: OTM 2019 Conferences. OTM 2019. Lecture Notes in Computer Science, vol. 11877. Springer, Cham. https://doi.org/10.1007/978-3-030-33246-4_23

  • DOI: https://doi.org/10.1007/978-3-030-33246-4_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33245-7

  • Online ISBN: 978-3-030-33246-4

  • eBook Packages: Computer Science (R0)
