Modeling Social Signals and Contexts in Robotic Socially Believable Behaving Systems

Chapter in: Toward Robotic Socially Believable Behaving Systems - Volume II

Part of the book series: Intelligent Systems Reference Library (ISRL, volume 106)

Abstract

A holistic perspective is needed when considering natural interactions with robotic socially believable behaving systems, one that accounts for the cultural, social, physical, and individual features (the context) that shape interactional exchanges. Context (physical, social, and organizational) governs individuals' social conduct and provides the means to render the world sensible and interpretable in the course of everyday activities. Contextual aspects make each interactional exchange unique, requiring different interpretations and actions. A robotic socially believable system must therefore be able to discriminate among the infinite variety of contextual instances and assign to each its own meaning. This book reports on the latest research efforts toward "natural" human interactional exchanges with social robotic autonomous systems devoted to improving the quality of life of their end users while assisting them with a range of needs, from educational settings and health care assistance to communicative disorders and any disorder impairing their physical, cognitive, or social functioning.



Author information

Correspondence to Anna Esposito.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Esposito, A., Jain, L.C. (2016). Modeling Social Signals and Contexts in Robotic Socially Believable Behaving Systems. In: Esposito, A., Jain, L. (eds) Toward Robotic Socially Believable Behaving Systems - Volume II. Intelligent Systems Reference Library, vol 106. Springer, Cham. https://doi.org/10.1007/978-3-319-31053-4_2

  • DOI: https://doi.org/10.1007/978-3-319-31053-4_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-31052-7

  • Online ISBN: 978-3-319-31053-4

  • eBook Packages: Engineering (R0)
