Multimodal Input Analysis and Design for a Home Information System

  • Jesús Cardeñosa
  • David Escorial
  • Edmundo Tovar
Part of the Advances in Soft Computing book series (AINSC, volume 7)


Nowadays we are installing in our homes an increasing number of devices that serve as the main access point to all kinds of information systems. This new generation of digital systems has a common characteristic: it is designed to cover the information needs of the home's dwellers. At the same time, these systems must possess several ergonomic characteristics: they have to fit into the environment, be easy to use (and to learn how to use), be attractive to users and, of course, provide the information users may require. One way to achieve this goal is to offer users several ways to express their information needs to the system. In this paper we describe the design of a multimodal input subsystem for a home information system. We discuss how different input modalities, such as speech, touch and handwriting, can be combined to give the user many ways to perform information-seeking tasks. This paper is based on experiences from the European Commission ESPRIT IV Project no. 29158 FLEX (Flexible Knowledge-based Information Access and Navigation using Multimodal Input/Output).
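The paper describes the subsystem at the design level only, but the central idea (mapping several input modalities onto one common query representation, so that the rest of the system is independent of the input channel) can be sketched in a few lines of Python. Everything below — the InputEvent type, the 0.5 confidence threshold, the handler names — is a hypothetical illustration, not code from the FLEX project:

    from dataclasses import dataclass
    from typing import Callable

    # Hypothetical event type; names and fields are illustrative
    # assumptions, not taken from the FLEX project itself.
    @dataclass
    class InputEvent:
        modality: str     # "speech", "touch" or "handwriting"
        payload: str      # recognized text, or id of a touched screen item
        confidence: float # recognizer confidence in [0, 1]

    class MultimodalInput:
        """Toy fusion layer: each recognizer posts events here, and a
        single handler turns them into information-seeking queries."""

        def __init__(self, on_query: Callable[[str], None]) -> None:
            self.on_query = on_query

        def receive(self, event: InputEvent) -> None:
            # A real system might start a clarification dialogue for
            # low-confidence recognitions; here we simply drop them.
            if event.confidence < 0.5:
                return
            # All modalities are mapped onto the same query
            # representation, so downstream components never need to
            # know which input channel was used.
            self.on_query(event.payload)

    # Usage: speech and touch lead to the same search behaviour.
    ui = MultimodalInput(on_query=lambda q: print(f"searching for: {q}"))
    ui.receive(InputEvent("speech", "weather in Madrid", 0.9))
    ui.receive(InputEvent("touch", "tv-guide", 1.0))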


Keywords: Speech Recognition, Information Seeking, Multimodal Interface, Handwriting Recognition, User Task




References

  1. Maybury, M.T. and Wahlster, W. (eds.): "Readings in Intelligent User Interfaces". Morgan Kaufmann, 1998.
  2. Cardeñosa, J. et al.: "Flexible Knowledge-based Information Access and Navigation using Multi-modal Input/Output". Deliverable 1.2.1, FLEX project web site.
  3. Bolt, R.A.: "Put That There: Voice and Gesture at the Graphics Interface". ACM Computer Graphics (Proceedings SIGGRAPH 80), 14(3), 262–270, 1980.
  4. Nigay, L. and Coutaz, J.: "A Generic Platform for Addressing the Multimodal Challenge". Proceedings of the 1995 ACM Conference on Human Factors in Computing Systems (CHI 95), ACM Press, 98–105, 1995.
  5. Oviatt, S.: "User-Centred Modelling for Spoken Language and Multimodal Interface". IEEE Multimedia, 3(4), 26–35, 1996.
  6. Stein, A. and Thiel, U.: "A Conversational Model of Multimodal Interaction". Proceedings of the 11th National Conference on Artificial Intelligence (AAAI '93), Menlo Park: AAAI Press/The MIT Press, 283–288, 1993.
  7. Benyon, D., Macaulay, C. and Baillie, L.: "Scenarios and Development of the Prototype Home Information Centre". To be published in International Journal of Human-Computer Studies, special issue on Household Information Technology, 2000.
  8. Bernsen, N.O., Dybkjaer, L. and Dybkjaer, H.: "Designing Interactive Speech Systems: From First Ideas to User Testing". Springer-Verlag, 1998.

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Jesús Cardeñosa (1)
  • David Escorial (1)
  • Edmundo Tovar (1)

  1. Facultad de Informática, Universidad Politécnica de Madrid, Boadilla del Monte (Madrid), Spain
