Generating Personalized Virtual Agent in Speech Dialogue System for People with Dementia

  • Shota Nakatani
  • Sachio Saiki
  • Masahide Nakamura
  • Kiyoshi Yasuda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10917)


Our research group has been studying a speech communication system with a virtual agent (VA) to support person-centered care (PCC) of people with dementia (PWD). The current system renders the VA as a 3D model of a fictional character. Because this unfamiliar appearance can be a psychological obstacle for PWD, they are reluctant to accept the agent's advice, which limits the effect of the care. In this paper, we develop a novel system that dynamically creates a VA from a given facial image of a real person. The proposed system constructs a three-dimensional model based on facial landmarks detected within the image, and then stretches and transforms portions of the 3D model to generate facial expressions. From a single photograph, the proposed system can easily generate a communication agent that is familiar to an individual PWD, enabling virtual but effective conversations with familiar partners. We implement a prototype of the proposed system and conduct an experiment targeting the elderly.
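The landmark-based deformation step described above can be sketched, purely as an illustration, as follows. The landmark indices (48 and 54, mouth corners in the common 68-point layout used by tools such as dlib) and the displacement rule are assumptions for this sketch, not the paper's actual parameters.

```python
import numpy as np

def deform_smile(landmarks, mouth_corner_idx=(48, 54), strength=0.1):
    """Return a copy of `landmarks` (N x 2 array of 2D facial landmark
    coordinates) with the mouth corners displaced outward and upward
    by `strength` times the mouth width, approximating a smile.

    Note: image y-coordinates grow downward, so "upward" is -y.
    """
    pts = np.asarray(landmarks, dtype=float).copy()
    left, right = mouth_corner_idx
    width = np.linalg.norm(pts[right] - pts[left])  # current mouth width
    shift = strength * width
    pts[left]  += np.array([-shift, -shift])   # left corner: out and up
    pts[right] += np.array([+shift, -shift])   # right corner: out and up
    return pts

# Minimal usage with synthetic landmarks (68-point layout assumed):
lm = np.zeros((68, 2))
lm[48] = [100.0, 200.0]   # left mouth corner
lm[54] = [160.0, 200.0]   # right mouth corner
smiled = deform_smile(lm)  # corners move to [94, 194] and [166, 194]
```

In a real pipeline, landmarks would come from a detector run on the given photograph, and the displacements would drive the corresponding vertices of the constructed 3D face model rather than the 2D points directly.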


Keywords: Virtual agent · Home elderly care · Person-centered care



This research was partially supported by the Japan Ministry of Education, Science, Sports, and Culture [Grant-in-Aid for Scientific Research (B) (16H02908, 15H02701), Grant-in-Aid for Scientific Research (A) (17H00731), Challenging Exploratory Research (15K12020)], and Tateishi Science and Technology Foundation (C) (No. 2177004).



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Shota Nakatani (1)
  • Sachio Saiki (1)
  • Masahide Nakamura (1)
  • Kiyoshi Yasuda (2)
  1. Graduate School of System Informatics, Kobe University, Kobe, Japan
  2. Chiba Rosai Hospital, Ichihara, Japan
