
Impact of Implicit and Explicit Affective Labeling on a Recommender System’s Performance

  • Marko Tkalčič
  • Ante Odić
  • Andrej Košir
  • Jurij Franc Tasič
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7138)

Abstract

Affective labeling of multimedia content can be useful in recommender systems. In this paper we compare the effects of implicit and explicit affective labeling in an image recommender system. The implicit affective labeling method is based on an emotion detection technique that takes video sequences of the users' facial expressions as input. It extracts Gabor low-level features from the video frames and employs a kNN machine learning technique to generate affective labels in the valence-arousal-dominance space. We performed a comparative study of the performance of a content-based recommender (CBR) system for images that uses three types of metadata to model the users and the items: (i) generic metadata, (ii) explicitly acquired affective labels and (iii) affective labels acquired implicitly with the proposed methodology. The results showed that the CBR performs best when explicit labels are used. However, the implicitly acquired labels still yield significantly better CBR performance than generic metadata, while being gathered through an unobtrusive feedback channel.
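The implicit labeling step outlined above (Gabor low-level features extracted from face frames, followed by kNN regression into the valence-arousal-dominance space) can be illustrated with a minimal sketch. This is not the authors' implementation: the filter-bank parameters, the frame size, the value of k, and the randomly generated stand-in data are all assumptions made only to keep the example self-contained and runnable.

```python
# Minimal sketch of the Gabor + kNN labeling pipeline described in the
# abstract, assuming grayscale, face-cropped frames as 2-D float arrays.
# Filter-bank parameters and k are illustrative, not taken from the paper.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor_kernel
from sklearn.neighbors import KNeighborsRegressor

def gabor_features(frame, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Summarize a frame by the mean/variance of Gabor filter-bank responses."""
    feats = []
    for theta in np.linspace(0.0, np.pi, n_orientations, endpoint=False):
        for frequency in frequencies:
            kernel = np.real(gabor_kernel(frequency, theta=theta))
            response = ndi.convolve(frame, kernel, mode="wrap")
            feats.extend([response.mean(), response.var()])
    return np.asarray(feats)

# Training: feature vectors from frames paired with explicit
# valence-arousal-dominance (VAD) ratings; kNN regresses the three
# affective dimensions jointly. Random arrays stand in for real data.
rng = np.random.default_rng(0)
frames = rng.random((20, 48, 48))        # stand-in for face video frames
vad = rng.uniform(1, 9, size=(20, 3))    # stand-in explicit VAD ratings
X = np.stack([gabor_features(f) for f in frames])
knn = KNeighborsRegressor(n_neighbors=5).fit(X, vad)

# Implicit labeling: predict [valence, arousal, dominance] for a new frame.
new_frame = rng.random((48, 48))
vad_label = knn.predict(gabor_features(new_frame)[None, :])
print(vad_label)
```

In the study itself, the training targets would come from the explicitly acquired affective labels, and the regressor's outputs then serve as the implicit affective metadata fed to the CBR.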

Keywords

Content-based recommender system · Affective labeling · Emotion detection · Facial expressions · Affective user modeling



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Marko Tkalčič (1)
  • Ante Odić (1)
  • Andrej Košir (1)
  • Jurij Franc Tasič (1)

  1. Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
