
Emousic: Emotion and Activity-Based Music Player Using Machine Learning

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 924))

Abstract

In this paper, we propose a new approach to personalized music playlist generation. Mood is inferred statistically from several data sources, primarily audio, images, text, and sensors: a user's mood is identified from facial expressions and speech tone, while physical activity is detected by the sensors in the cellphones people routinely carry. State-of-the-art data science techniques make it computationally feasible to recognize such activities from very large datasets; the program learns from the data, and machine learning classifies and predicts outcomes using the trained models. With these techniques, an application can recognize or predict a user's mood and activities for the user's benefit. Emousic is a real-time mood and activity recognition use case: a smart music player that keeps learning your listening habits and plays songs suited to your past habits, current mood, and activity. It is, in short, a personalized playlist generator.
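The fusion step the abstract describes — combining an inferred mood label, a detected activity label, and the user's listening history into a song ranking — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the catalogue tags, label names, and the `recommend` helper are all hypothetical, and the real system would obtain the mood and activity labels from its classifiers rather than take them as strings.

```python
# Hypothetical sketch of Emousic-style playlist selection: fuse a mood label
# (e.g. from face/speech classifiers) and an activity label (e.g. from phone
# sensors) with the user's listening history to rank candidate songs.
from collections import Counter

# Hypothetical song catalogue, tagged by (mood, activity) context.
CATALOGUE = {
    ("happy", "running"): ["upbeat_pop", "dance_mix"],
    ("happy", "idle"): ["acoustic_chill"],
    ("sad", "idle"): ["slow_ballad", "ambient"],
    ("sad", "walking"): ["mellow_indie"],
}

def recommend(mood, activity, history, k=2):
    """Return up to k songs matching the inferred (mood, activity)
    context, preferring songs the user has played most often before."""
    candidates = CATALOGUE.get((mood, activity), [])
    plays = Counter(history)  # past listening habits
    return sorted(candidates, key=lambda s: -plays[s])[:k]
```

For example, a happy user who is running and has previously favoured `dance_mix` would see that song ranked first; with no history, the catalogue order is kept.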



Author information

Correspondence to Pranav Sarda.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Sarda, P., Halasawade, S., Padmawar, A., Aghav, J. (2019). Emousic: Emotion and Activity-Based Music Player Using Machine Learning. In: Bhatia, S., Tiwari, S., Mishra, K., Trivedi, M. (eds) Advances in Computer Communication and Computational Sciences. Advances in Intelligent Systems and Computing, vol 924. Springer, Singapore. https://doi.org/10.1007/978-981-13-6861-5_16
