


Copyright information

© 2005 Springer Science+Business Media, Inc.

About this chapter

Cite this chapter

Turk, M. (2005). Multimodal Human-Computer Interaction. In: Kisačanin, B., Pavlović, V., Huang, T.S. (eds) Real-Time Vision for Human-Computer Interaction. Springer, Boston, MA. https://doi.org/10.1007/0-387-27890-7_16


  • DOI: https://doi.org/10.1007/0-387-27890-7_16

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-0-387-27697-7

  • Online ISBN: 978-0-387-27890-2

  • eBook Packages: Computer Science, Computer Science (R0)
