Multimedia Tools and Applications

Volume 78, Issue 3, pp 3267–3276

Personalized smart home audio system with automatic music selection based on emotion

  • Dongwann Kang
  • Sanghyun Seo


In this paper, we introduce a personalized home audio system that uses IoT technologies to recommend and play music remotely based on a user's estimated emotion. The system estimates the user's emotion from text entered on their smartphone during outdoor activities, then searches a music database for tracks that match that emotion. When the user returns home, the system automatically detects their arrival and plays the recommended music through a connected audio system. Personalized, emotion-based music recommendation is thus provided transparently, without requiring any explicit action from the user.
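The pipeline the abstract describes (text-based emotion estimation followed by emotion-matched music retrieval) can be sketched in the commonly used valence-arousal representation of emotion. Everything below is an illustrative assumption, not the paper's actual implementation: the lexicon values, the track annotations, and the function names are all hypothetical.

```python
import math

# Hypothetical word-emotion lexicon mapping words to (valence, arousal)
# coordinates; the values are illustrative, not from any published norm set.
WORD_NORMS = {
    "happy": (0.9, 0.6),
    "calm": (0.7, -0.5),
    "angry": (-0.8, 0.7),
    "tired": (-0.3, -0.6),
}

# Hypothetical music library, each track annotated with a (valence, arousal)
# point describing the emotion it conveys.
MUSIC_DB = [
    ("upbeat_pop.mp3", (0.8, 0.7)),
    ("soft_piano.mp3", (0.6, -0.6)),
    ("heavy_metal.mp3", (-0.6, 0.8)),
    ("slow_blues.mp3", (-0.4, -0.4)),
]

def estimate_emotion(text):
    """Average the valence-arousal norms of the known words in the text."""
    points = [WORD_NORMS[w] for w in text.lower().split() if w in WORD_NORMS]
    if not points:
        return (0.0, 0.0)  # neutral point when no lexicon word matches
    valence = sum(p[0] for p in points) / len(points)
    arousal = sum(p[1] for p in points) / len(points)
    return (valence, arousal)

def recommend(emotion):
    """Return the track whose annotation is nearest the estimated emotion."""
    return min(MUSIC_DB, key=lambda track: math.dist(emotion, track[1]))[0]

print(recommend(estimate_emotion("feeling happy today")))  # upbeat_pop.mp3
```

In a deployment like the one described, the lexicon lookup would run on the smartphone during the day, and the recommendation step would be triggered by the home-arrival detection event.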


Internet of things · Emotion estimation · Music recommendation



This work was supported by Seoul National University of Science & Technology and by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2016R1D1A1B03935378).



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science and Engineering, Seoul National University of Science and Technology, Seoul, Korea
  2. Division of Media Software, Sungkyul University, Anyang, Republic of Korea
