
Web-Based Embodied Conversational Agents and Older People

  • Gerard Llorach
  • Javi Agenjo
  • Josep Blat
  • Sergio Sayago
Chapter in the Human–Computer Interaction Series (HCIS)

Abstract

Within Human-Computer Interaction, there has recently been an important turn towards embodied and voice-based interaction. In this chapter, we discuss our ongoing research on building online Embodied Conversational Agents (ECAs), focusing on their interactive 3D web graphics aspects. We present ECAs built with our technological pipeline, which integrates a number of free online editors, such as Adobe Fuse CC or MakeHuman, and standards, mainly BML (Behaviour Markup Language). We claim that making embodiment available for online ECAs is attainable, and advantageous over current, mostly desktop-based, alternatives. We also report initial results of activities aimed at exploring the physical appearance of ECAs for older people: a group of them (N = 14) designed female ECAs and found doing so easy and great fun. The perspective on older-adult HCI introduced in this chapter is mostly technological, allowing for rapid online experimentation to address key issues, such as anthropomorphic aspects, in the design of ECAs with, and for, older people.
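To make the role of BML in such a pipeline concrete, the following minimal sketch (an illustrative assumption, not the chapter's actual implementation) shows how a browser-based ECA client might parse a BML-like block and hand each behaviour to a scheduler. The element names <speech>, <gaze> and <face> follow BML conventions, but the example content and the scheduleBehaviour function are hypothetical; the code is TypeScript and relies only on the browser's standard DOMParser API.

// Minimal, illustrative sketch: parse a BML-like XML block in the browser and
// dispatch each behaviour to a hypothetical scheduler that would animate a
// web-based 3D character (e.g. via blend shapes, skeletal animation or TTS).

const bmlRequest = `
<bml id="greeting">
  <speech id="s1" start="0">Hello, nice to meet you.</speech>
  <gaze id="g1" target="user" start="0"/>
  <face id="f1" type="smile" start="0.2" end="2.0"/>
</bml>`;

interface Behaviour {
  tag: string;                         // behaviour type: "speech", "gaze", "face", ...
  id: string | null;                   // behaviour id from the BML block
  start: number;                       // start time in seconds (0 if unspecified)
  attributes: Record<string, string>;  // remaining attributes (target, type, ...)
  text: string;                        // element text, used by <speech>
}

// Flatten the <bml> element's children into a list of behaviours.
function parseBml(xml: string): Behaviour[] {
  const doc = new DOMParser().parseFromString(xml, "application/xml");
  const behaviours: Behaviour[] = [];
  for (const el of Array.from(doc.documentElement.children)) {
    const attributes: Record<string, string> = {};
    for (const attr of Array.from(el.attributes)) {
      attributes[attr.name] = attr.value;
    }
    behaviours.push({
      tag: el.tagName,
      id: el.getAttribute("id"),
      start: parseFloat(el.getAttribute("start") ?? "0"),
      attributes,
      text: el.textContent?.trim() ?? "",
    });
  }
  return behaviours;
}

// Hypothetical scheduler: a real behaviour realiser would map each entry onto
// the character's animation and speech systems; here we only log it.
function scheduleBehaviour(b: Behaviour): void {
  console.log(`t=${b.start}s ${b.tag}#${b.id}:`, b.attributes, b.text);
}

parseBml(bmlRequest).forEach(scheduleBehaviour);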


Acknowledgements

This work was partly funded by the EU’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 675324 (ENRICH) and under the contract number H2020-645012-RIA (KRISTINA). We also acknowledge the support from the A-C-T network.

References

  1. Adobe (2018a) Adobe Fuse CC (beta). https://www.adobe.com/products/fuse.html/. Accessed 15 Oct 2018
  2. Adobe (2018b) Mixamo. https://www.mixamo.com/. Accessed 15 Oct 2018
  3. Agenjo J, Evans A, Blat J (2013) WebGLStudio: a pipeline for WebGL scene creation. In: Proceedings of the 18th international conference on 3D web technology. ACM, NY, USA, pp 79–82
  4. Autodesk (2014) Autodesk Character Generator. https://charactergenerator.autodesk.com/. Accessed 15 Oct 2018
  5. Beale R, Creed C (2009) Affective interaction: how emotional agents affect users. Int J Hum Comput Stud 67(9):755–776
  6. Bickmore TW, Caruso L, Clough-Gorr K et al (2005) ‘It’s just like you talk to a friend’: relational agents for older adults. Interact Comput 17:711–735
  7. Carrasco R (2017) Designing virtual avatars to empower social participation among older adults. In: Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems. ACM, NY, USA, pp 259–262
  8. Chi N, Sparks O, Lin S-Y et al (2017) Pilot testing a digital pet avatar for older adults. Geriatr Nurs 38:542–547
  9. Daz Productions (2018) Daz Studio. https://www.daz3d.com/daz_studio. Accessed 15 Oct 2018
  10. Druga S, Breazeal C, Williams R et al (2017) Hey Google is it OK if I eat you? Initial explorations in child-agent interaction. In: Proceedings of the 2017 conference on interaction design and children (IDC 2017). ACM, NY, USA, pp 595–600. https://doi.org/10.1145/3078072.3084330
  11. Ebling MR (2016) Can cognitive assistants disappear? IEEE Pervasive Comput 15(3):4–6. https://doi.org/10.1109/MPRV.2016.41
  12. Ekman P, Rosenberg EL (eds) (1997) What the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (FACS). Oxford University Press, USA
  13. Epic Games (2018) Unreal Engine 4. https://www.unrealengine.com/. Accessed 15 Oct 2018
  14. Evans A, Agenjo J, Blat J (2018) A pipeline for the creation of progressively rendered web 3D scenes. Multimed Tools Appl 77:20355–20383
  15. Evans A, Romeo M, Bahrehmand A et al (2014) 3D graphics on the web: a survey. Comput Graph 41:43–61
  16. Feng A, Casas D, Shapiro A (2015) Avatar reshaping and automatic rigging using a deformable model. In: Proceedings of the 8th ACM SIGGRAPH conference on motion in games. ACM, NY, USA, pp 57–64
  17. Ferreira SM, Sayago S, Blat J (2017) Older people’s production and appropriation of digital videos: an ethnographic study. Behav Inf Technol 36(6):557–574. https://doi.org/10.1080/0144929X.2016.1265150
  18. Gibet S, Carreno-Medrano P, Marteau PF (2016) Challenges for the animation of expressive virtual characters: the standpoint of sign language and theatrical gestures. In: Dance notations and robot motion. Springer, pp 169–186
  19. Guo PJ (2017) Older adults learning computer programming: motivations, frustrations, and design opportunities. In: Proceedings of the 2017 CHI conference on human factors in computing systems. ACM, NY, USA, pp 7070–7083. https://doi.org/10.1145/3025453.3025945
  20. Heloir A, Kipp M (2009) EMBR: a realtime animation engine for interactive embodied agents. In: 3rd international conference on affective computing and intelligent interaction and workshops, Amsterdam, pp 1–2. https://doi.org/10.1109/acii.2009.5349524
  21. Heylen D, Kopp S, Marsella SC et al (2008) The next step towards a function Markup Language. In: Prendinger H, Lester J, Ishizuka M (eds) Intelligent virtual agents. IVA 2008. Lecture notes in computer science, vol 5208. Springer, Berlin, Heidelberg
  22. Huang J, Pelachaud C (2012) Expressive body animation pipeline for virtual agent. In: International conference on intelligent virtual agents. Springer, Berlin, Heidelberg, pp 355–362
  23. Hyde J, Carter EJ, Kiesler S et al (2014) Assessing naturalness and emotional intensity: a perceptual study of animated facial motion. In: Proceedings of the ACM symposium on applied perception. ACM, NY, USA, pp 15–22
  24. Hyde J, Carter EJ, Kiesler S et al (2015) Using an interactive avatar’s facial expressiveness to increase persuasiveness and socialness. In: Proceedings of the 33rd annual ACM conference on human factors in computing systems. ACM, NY, USA, pp 1719–1728
  25. Karras T, Aila T, Laine S et al (2017) Audio-driven facial animation by joint end-to-end learning of pose and emotion. ACM Trans Graph 36(4):94
  26. Kopp S, Krenn B, Marsella SC et al (2006) Towards a common framework for multi-modal generation: the behavior Markup Language. In: Gratch J, Young M, Aylett RS et al (eds) IVA 2006. LNCS (LNAI), vol 4133. Springer, Heidelberg, pp 205–217
  27. Lakoff G, Johnson M (2003) Metaphors we live by. The University of Chicago Press, London
  28. Lewis W, Lester C (2016) Face-to-face interaction with pedagogical agents, twenty years later. Int J Artif Intell Educ 26:25–36
  29. Liu J, You M, Chen C et al (2011) Real-time speech-driven animation of expressive talking faces. Int J Gen Syst 40(4):439–455
  30. Llorach G, Blat J (2017) Say Hi to Eliza. In: International conference on intelligent virtual agents. Springer, pp 255–258
  31. Llorach G, Evans A, Blat J et al (2016) Web-based live speech-driven lip-sync. In: 2016 8th international conference on games and virtual worlds for serious applications (VS-Games). IEEE, pp 1–4
  32. MakeHuman_Team (2016) MakeHuman. http://www.makehuman.org/. Accessed 15 Oct 2018
  33. Maña F, Toro I, Sayago S et al (2018) Older people’s interactive experiences through a citizen science lens: a research report. Funded by ACT (Ageing-Communication-Technologies)
  34. Martínez-Miranda J (2017) Embodied conversational agents for the detection and prevention of suicidal behaviour: current applications and open challenges. J Med Syst 41:135
  35. McTear M, Callejas Z, Griol D (2016) The conversational interface: talking to smart devices. Springer
  36. Pradhan A, Mehta K, Findlater L (2018) Accessibility came by accident: use of voice-controlled intelligent personal assistants by people with disabilities. In: Proceedings of the 2018 CHI conference on human factors in computing systems. ACM, NY, USA, p 459
  37. Provoost S, Ming H, Reward J et al (2017) Embodied conversational agents in clinical psychology: a scoping review. J Med Internet Res 19(5):1–17
  38. Rice M, Koh RYI, Ng J (2016) Investigating gesture-based avatar game representations in teenagers, younger and older adults. Entertain Comput 12:40–50
  39. Ring L, Utami D, Bickmore T (2014) The right agent for the job? In: International conference on intelligent virtual agents. Springer, Cham, pp 374–384
  40. Rogers Y, Marsden G (2013) Does he take sugar? Moving beyond the rhetoric of compassion. Interactions 20(4):48–57
  41. Romeo M (2016) Automated processes and intelligent tools in CG media production. PhD dissertation. http://hdl.handle.net/10803/373915
  42. Roosendaal T (1995) Blender. https://www.blender.org/. Accessed 15 Oct 2018
  43. Ruhland K, Peters CE, Andrist S et al (2015) A review of eye gaze in virtual agents, social robotics and HCI: behaviour generation, user interaction and perception. In: Computer graphics forum, vol 34, no 6, pp 299–326
  44. Shamekhi A, Liao Q, Wang D et al (2018) Face value? Exploring the effects of embodiment for a group facilitation agent. CHI 2018, Canada, Paper 391
  45. Tekalp AM, Ostermann J (2000) Face and 2-D mesh animation in MPEG-4. Signal Process Image Commun 15(4–5):387–421
  46. Unity Technologies (2018) UNITY. https://unity3d.com/. Accessed 15 Oct 2018
  47. Vinciarelli A, Esposito A, André E et al (2015) Open challenges in modelling, analysis and synthesis of human behaviour in human-human and human-machine interactions. Cogn Comput 7:397–413
  48. Wanner L et al (2017) KRISTINA: a knowledge-based virtual conversation agent. In: Demazeau Y, Davidsson P, Bajo J et al (eds) Advances in practical applications of cyber-physical multi-agent systems: the PAAMS collection. PAAMS 2017. Lecture notes in computer science, vol 10349. Springer, Cham. https://doi.org/10.1007/978-3-319-59930-4_23
  49. Wei L, Deng Z (2015) A practical model for live speech-driven lip-sync. IEEE Comput Graph Appl 35(2):70–78

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Gerard Llorach (1) (corresponding author)
  • Javi Agenjo (2)
  • Josep Blat (2)
  • Sergio Sayago (3)

  1. Hörzentrum Oldenburg GmbH & Medizinische Physik and Cluster of Excellence ‘Hearing4all’, Universität Oldenburg, Oldenburg, Germany
  2. Universitat Pompeu Fabra, Barcelona, Spain
  3. Universitat de Barcelona, Barcelona, Spain
