Web-Based Embodied Conversational Agents and Older People

Chapter in: Perspectives on Human-Computer Interaction Research with Older People

Part of the book series: Human–Computer Interaction Series (HCIS)

Abstract

Within Human-Computer Interaction, there has recently been an important turn towards embodied and voice-based interaction. In this chapter, we discuss our ongoing research on building online Embodied Conversational Agents (ECAs), focusing specifically on their interactive 3D web graphics aspects. We present ECAs built with our technological pipeline, which integrates a number of free online editors, such as Adobe Fuse CC and MakeHuman, with standards, mainly BML (Behaviour Markup Language). We argue that making embodiment available for online ECAs is attainable, and advantageous over current, mostly desktop-based, alternatives. We also report initial results of activities aimed at exploring the physical appearance of ECAs for older people: a group of older adults (N = 14) designed female ECAs and found doing so easy and great fun. The perspective on older-adult HCI introduced in this chapter is mostly technological, enabling rapid online experimentation to address key issues, such as anthropomorphic aspects, in the design of ECAs with, and for, older people.
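To make the web-based approach concrete, the sketch below shows how an embodied agent might be rendered directly in the browser and driven by a small BML fragment. It is an illustration only, not the pipeline described in the chapter: it assumes three.js as the rendering library, a hypothetical rigged glTF avatar file ("avatar.glb") with an animation clip named "Wave", and it maps a single BML gesture element onto that clip.

```typescript
// Illustrative sketch only: three.js, "avatar.glb" and the "Wave" clip are
// assumptions for this example, not the chapter's actual pipeline or assets.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 1.6, 2.5);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1.0));

let mixer: THREE.AnimationMixer | undefined;
let clips: THREE.AnimationClip[] = [];

// A toy BML "realiser": parse a BML fragment and map <gesture lexeme="WAVE">
// onto the avatar's "Wave" animation clip.
function realiseBML(bml: string): void {
  const doc = new DOMParser().parseFromString(bml, 'application/xml');
  doc.querySelectorAll('gesture').forEach((gesture) => {
    if (gesture.getAttribute('lexeme') === 'WAVE' && mixer) {
      const clip = THREE.AnimationClip.findByName(clips, 'Wave');
      if (clip) mixer.clipAction(clip).reset().play();
    }
  });
}

// Load a rigged, web-friendly avatar (e.g. authored in a character editor and
// converted to glTF) and keep its animation clips for later playback.
new GLTFLoader().load('avatar.glb', (gltf) => {
  scene.add(gltf.scene);
  mixer = new THREE.AnimationMixer(gltf.scene);
  clips = gltf.animations;
  // Once the avatar is ready, realise a tiny BML block.
  realiseBML('<bml id="b1"><gesture id="g1" lexeme="WAVE"/></bml>');
});

// Render loop: advance animations and draw the scene each frame.
const clock = new THREE.Clock();
renderer.setAnimationLoop(() => {
  mixer?.update(clock.getDelta());
  renderer.render(scene, camera);
});
```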


Notes

  1. Surface meshes are the most common representations in 3D graphics; alternatives include voxels, used in medical and engineering applications, and unstructured point clouds produced by 3D scanners.
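As an informal illustration of the surface-mesh representation mentioned in this note, the sketch below builds a tiny indexed triangle mesh (shared vertex positions plus triangle indices). three.js is assumed purely for illustration; the geometry is an arbitrary example, not an asset from the chapter.

```typescript
// A minimal indexed surface mesh: four shared vertices and two triangles
// forming a unit quad in the z = 0 plane.
import * as THREE from 'three';

const positions = new Float32Array([
  0, 0, 0,  // vertex 0
  1, 0, 0,  // vertex 1
  1, 1, 0,  // vertex 2
  0, 1, 0,  // vertex 3
]);
const indices = [0, 1, 2, 0, 2, 3]; // two triangles sharing the edge 0-2

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setIndex(indices);
geometry.computeVertexNormals(); // per-vertex normals for shading

const mesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: 0x8899aa }));
// mesh can now be added to any THREE.Scene, e.g. scene.add(mesh);
```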


Acknowledgements

This work was partly funded by the EU’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 675324 (ENRICH) and under the contract number H2020-645012-RIA (KRISTINA). We also acknowledge the support from the A-C-T network.

Author information

Correspondence to Gerard Llorach.

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Llorach, G., Agenjo, J., Blat, J., Sayago, S. (2019). Web-Based Embodied Conversational Agents and Older People. In: Sayago, S. (ed) Perspectives on Human-Computer Interaction Research with Older People. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-030-06076-3_8

  • DOI: https://doi.org/10.1007/978-3-030-06076-3_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-06075-6

  • Online ISBN: 978-3-030-06076-3

  • eBook Packages: Computer Science, Computer Science (R0)
