
Object Oriented Design for Multiple Modalities in Affective Interaction

Object-Oriented User Interfaces for Personalized Mobile Learning

Part of the book series: Intelligent Systems Reference Library ((ISRL,volume 64))

Abstract

The purpose of this chapter is to investigate how an object-oriented (OO) architecture can be adapted to multimodal emotion recognition applications with mobile interfaces. A major obstacle in this direction is that mobile phones, unlike desktop computers, are not capable of the demanding processing that emotion recognition requires. To overcome this limitation, in our approach mobile phones transmit all collected data to a server, which is responsible for performing, among other tasks, the emotion recognition itself. The object-oriented architecture that we have created combines evidence from multiple modalities of interaction, namely the mobile device's keyboard and microphone, as well as data from user stereotypes. All collected information is classified into well-structured objects with their own properties and methods. The resulting emotion detection platform is capable of processing and re-transmitting information from different mobile sources of multimodal data during human–computer interaction. The interface that has been used as a test bed for the affective mobile interaction is that of an educational m-learning application.
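The client–server, object-oriented design described above can be sketched as follows. Each modality (keyboard, microphone) is modeled as an evidence object with its own properties and methods, and a server-side recognizer fuses their per-emotion scores with a simple weighted sum, a minimal stand-in for the multi-attribute decision-making methods the book builds on. All class names, weights, and emotion labels here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: modality evidence as objects, fused on a server.
# Class names, weights, and emotion labels are assumptions for illustration.

from dataclasses import dataclass

EMOTIONS = ("neutral", "happiness", "anger")

@dataclass
class ModalityEvidence:
    """Per-emotion likelihoods captured on the mobile client."""
    scores: dict          # emotion -> likelihood in [0, 1]
    weight: float = 1.0   # how much the server trusts this modality

@dataclass
class KeyboardEvidence(ModalityEvidence):
    weight: float = 0.4   # assumed trust in keystroke-based evidence

@dataclass
class MicrophoneEvidence(ModalityEvidence):
    weight: float = 0.6   # assumed trust in audio-based evidence

class EmotionServer:
    """Server-side recognizer: mobile clients transmit evidence objects here."""

    def recognize(self, evidence_list):
        # Weighted-sum fusion across modalities, normalized by total weight.
        fused = {e: 0.0 for e in EMOTIONS}
        total_weight = sum(ev.weight for ev in evidence_list)
        for ev in evidence_list:
            for emotion in EMOTIONS:
                fused[emotion] += ev.weight * ev.scores.get(emotion, 0.0)
        fused = {e: s / total_weight for e, s in fused.items()}
        return max(fused, key=fused.get)

server = EmotionServer()
result = server.recognize([
    KeyboardEvidence(scores={"neutral": 0.5, "anger": 0.5}),
    MicrophoneEvidence(scores={"anger": 0.8, "neutral": 0.2}),
])
print(result)  # "anger": its fused score (0.68) dominates here
```

In the real platform, user-stereotype data would supply a third evidence object, and the server would re-transmit the recognized emotion back to the m-learning interface; the sketch keeps only the fusion step.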



Author information


Correspondence to Efthimios Alepis.


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Alepis, E., Virvou, M. (2014). Object Oriented Design for Multiple Modalities in Affective Interaction. In: Object-Oriented User Interfaces for Personalized Mobile Learning. Intelligent Systems Reference Library, vol 64. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-53851-3_8


  • DOI: https://doi.org/10.1007/978-3-642-53851-3_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-53850-6

  • Online ISBN: 978-3-642-53851-3

  • eBook Packages: Engineering (R0)
