Abstract
In this paper, we propose a new approach to personalized music playlist generation. The user's mood is statistically inferred from multiple data sources, primarily audio, image, text, and sensor data: mood is identified from facial expressions and speech tone, while physical activity is detected through the sensors in the smartphones people routinely carry. State-of-the-art data science techniques now make it computationally feasible to recognize such states from very large datasets: the system learns from data, and machine learning models classify and predict mood and activity for the user's benefit. Emousic is a real-time mood- and activity-recognition application: a smart music player that continually learns the user's listening habits and plays songs suited to their past preferences, current mood, and activity. In short, it is a personalized playlist generator.
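As a rough illustration of the playlist-generation idea described above (not the authors' actual implementation, whose details are not given here), the core step can be sketched as ranking candidate songs by how often the user played them under the same predicted mood and activity. All names and data below are hypothetical.

```python
from collections import Counter

def rank_songs(history, mood, activity, candidates):
    """Rank candidate songs by how often each was played under the
    same (mood, activity) context in the listening history.

    history: list of (song, mood, activity) tuples from past sessions.
    mood, activity: labels predicted by the recognition models.
    """
    # Count plays that match the current mood/activity context.
    counts = Counter(
        song for (song, m, a) in history if m == mood and a == activity
    )
    # Songs never played in this context get a count of 0 and sort last.
    return sorted(candidates, key=lambda s: -counts[s])

# Illustrative listening history and query.
history = [
    ("Song A", "happy", "running"),
    ("Song A", "happy", "running"),
    ("Song B", "sad", "resting"),
    ("Song C", "happy", "running"),
]
playlist = rank_songs(history, "happy", "running", ["Song B", "Song C", "Song A"])
print(playlist)  # songs most played in this mood/activity context come first
```

In a full system, the `mood` and `activity` labels would come from the classifiers described in the paper (facial expression, speech tone, phone sensors), and the simple count could be replaced by any learned preference model.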
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this paper
Sarda, P., Halasawade, S., Padmawar, A., Aghav, J. (2019). Emousic: Emotion and Activity-Based Music Player Using Machine Learning. In: Bhatia, S., Tiwari, S., Mishra, K., Trivedi, M. (eds) Advances in Computer Communication and Computational Sciences. Advances in Intelligent Systems and Computing, vol 924. Springer, Singapore. https://doi.org/10.1007/978-981-13-6861-5_16
Print ISBN: 978-981-13-6860-8
Online ISBN: 978-981-13-6861-5
eBook Packages: Intelligent Technologies and Robotics