Employing Virtual Humans for Interaction, Assistance and Information Provision in Ambient Intelligence Environments

  • Chryssi Birliraki
  • Dimitris Grammenos
  • Constantine Stephanidis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9189)

Abstract

This paper reports on the design, development and evaluation of a framework that employs virtual humans for information provision. The framework can be used to create interactive multimedia information visualizations (e.g., images, text, audio, video, 3D models), provides a dynamic data modeling mechanism for storage and retrieval, and supports communication through multimodal interaction techniques. Interaction may involve human-to-agent, agent-to-environment or agent-to-agent communication. The virtual agents can assume alternative roles, acting as assistants to existing systems, as standalone “applications”, or as integral parts of emerging smart environments. Finally, an evaluation study was conducted with 10 participants to assess the system’s usability and effectiveness when employed as an assistance mechanism for another application. The results were highly positive and promising, confirming the system’s usability and encouraging further research in this area.
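To make the abstract’s architecture concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of how the three communication channels and the alternative agent roles described above could be modeled. All names (Channel, Role, Message, VirtualAgent, and so on) are illustrative assumptions, not the authors’ actual API.

    # Hypothetical sketch of the channel/role model described in the abstract.
    # All identifiers are assumptions for illustration, not the paper's API.
    from dataclasses import dataclass, field
    from enum import Enum, auto
    from typing import Callable, Dict, List


    class Channel(Enum):
        """The three communication paths the framework supports."""
        HUMAN_TO_AGENT = auto()
        AGENT_TO_AGENT = auto()
        AGENT_TO_ENVIRONMENT = auto()


    class Role(Enum):
        """Alternative roles a virtual agent may assume."""
        ASSISTANT = auto()    # assists an existing application
        STANDALONE = auto()   # acts as a self-contained "application"
        ENVIRONMENT = auto()  # integral part of a smart environment


    @dataclass
    class Message:
        channel: Channel
        sender: str
        payload: dict  # e.g. {"modality": "speech", "text": "show the 3D model"}


    @dataclass
    class VirtualAgent:
        name: str
        role: Role
        handlers: Dict[Channel, List[Callable[[Message], None]]] = field(
            default_factory=dict)

        def on(self, channel: Channel,
               handler: Callable[[Message], None]) -> None:
            """Register a handler for one of the communication channels."""
            self.handlers.setdefault(channel, []).append(handler)

        def receive(self, message: Message) -> None:
            """Dispatch an incoming message to the handlers for its channel."""
            for handler in self.handlers.get(message.channel, []):
                handler(message)


    # Usage: an assistant agent reacting to a recognized spoken request.
    guide = VirtualAgent(name="guide", role=Role.ASSISTANT)
    guide.on(Channel.HUMAN_TO_AGENT,
             lambda m: print(f"{guide.name} handles: {m.payload['text']}"))
    guide.receive(Message(Channel.HUMAN_TO_AGENT, "visitor",
                          {"modality": "speech", "text": "show the 3D model"}))

In this reading, an agent’s role determines how it is embedded (assistant, standalone, or part of a smart environment), while per-channel handlers realize the human-to-agent, agent-to-agent and agent-to-environment paths.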

Keywords

Virtual humans · Virtual agents · Virtual assistants · Embodied agents · Multimodal interaction · User-agent interaction · Usability evaluation

Acknowledgements

The work reported in this paper has been conducted in the context of the AmI Programme of the Institute of Computer Science (ICS) of the Foundation for Research and Technology - Hellas (FORTH). The authors would like to express their gratitude to Anthony Katzourakis for the artistic work and 3D modeling, and to the Signal Processing Laboratory (SPL) of ICS-FORTH, especially Elena Karamichali, for creating the speech recognition system.


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
  2. Computer Science Department, University of Crete, Heraklion, Greece
