Abstract

With new technologies for medical cyber-physical systems, such as networked head-mounted displays (HMDs) and eye trackers, new opportunities arise for real-time interaction between cyber-physical systems and their users. This leads to cyber-physical environments in which the user plays an active role inside the cyber-physical system. In our medical application, set in the context of a cancer screening programme, we combine active speech-based input, passive/active eye-tracker input, and HMD output (all devices are on-body and hands-free) in a way that is convenient for both the patient and the doctor inside such a medical cyber-physical system. In this paper, we discuss the design and implementation of the resulting Medical Multimodal Cyber-Physical Environment and focus on how situation awareness, provided by the environmental sensors, effectively leads to an augmented cognition application for the doctor.
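To illustrate the kind of multimodal fusion the abstract refers to, the following is a minimal sketch (not the paper's implementation) of how a spoken command could be combined with the most recent gaze fixation to resolve a deictic reference such as "show me this image" on the HMD. All class names, fields, and the 1.5-second lag threshold are illustrative assumptions.

```python
# Minimal sketch: fusing a speech command with recent gaze fixations to
# resolve a deictic reference. Names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class GazeFixation:
    timestamp: float   # seconds since session start
    target_id: str     # identifier of the fixated screen region or object


@dataclass
class SpeechCommand:
    timestamp: float
    utterance: str


def resolve_deictic_target(command: SpeechCommand,
                           fixations: List[GazeFixation],
                           max_lag: float = 1.5) -> Optional[str]:
    """Return the object most recently looked at, provided the fixation
    occurred within `max_lag` seconds before the spoken command."""
    candidates = [f for f in fixations
                  if 0.0 <= command.timestamp - f.timestamp <= max_lag]
    if not candidates:
        return None
    # Prefer the fixation closest in time to the utterance.
    return max(candidates, key=lambda f: f.timestamp).target_id


if __name__ == "__main__":
    gaze_log = [GazeFixation(10.2, "mammogram_left"),
                GazeFixation(11.8, "mammogram_right")]
    cmd = SpeechCommand(12.4, "show me this image")
    print(resolve_deictic_target(cmd, gaze_log))  # -> "mammogram_right"
```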

Keywords

Augmented Reality · Activity Recognition · Situation Awareness · Dialogue System · Multimodal Interaction


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Daniel Sonntag
    1. German Research Center for AI (DFKI), Saarbruecken, Germany
