Multimodal Dialogue for Ambient Intelligence and Smart Environments


Ambient Intelligence (AmI) and Smart Environments (SmE) rest on three foundations: ubiquitous computing, ubiquitous communication and intelligent adaptive interfaces [41]. Systems of this type consist of interconnected computing and sensing devices that surround users pervasively in their environment while remaining invisible to them, providing services that are dynamically adapted to the interaction context, so that users can interact with the system naturally and thus perceive it as intelligent.
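To make the idea of context-adaptive behaviour concrete, the following minimal Python sketch (purely illustrative; the class, thresholds and modality names are assumptions, not taken from the chapter) shows how sensed context attributes might drive the choice of output modality in a multimodal dialogue system:

```python
from dataclasses import dataclass

# Hypothetical snapshot of the interaction context, as gathered from
# environment sensors (all names here are illustrative, not from the chapter).
@dataclass
class Context:
    ambient_noise_db: float   # e.g. from a room microphone
    user_present: bool        # e.g. from a presence/location sensor
    display_available: bool   # e.g. from device discovery in the environment

def choose_output_modality(ctx: Context) -> str:
    """Adapt how the dialogue system responds to the sensed context."""
    if not ctx.user_present:
        # User has left the room: defer the response to a mobile notification.
        return "mobile_notification"
    if ctx.ambient_noise_db > 70 and ctx.display_available:
        # Too noisy for reliable speech output: fall back to on-screen text.
        return "screen_text"
    return "speech"

if __name__ == "__main__":
    ctx = Context(ambient_noise_db=75.0, user_present=True, display_available=True)
    print(choose_output_modality(ctx))   # -> screen_text
```

In an actual AmI deployment this decision would be taken by the dialogue manager using richer context models and user preferences rather than fixed thresholds.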


Keywords: Dialogue System, Ambient Intelligence, Context Awareness, Multimodal Interface, Smart Environment




References

  1. Batliner A, Hacker C, Steidl S, Nöth E, D’Arcy S, Russel M, Wong M (2004) Towards multilingual speech recognition using data driven source/target acoustical units association. In: Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’04), Montreal, Quebec, Canada, pp 521–524
  2. Beringer N, Karal U, Louka K, Schiel F, Türk U (2002) PROMISE: A procedure for multimodal interactive system evaluation. In: Proc. of the LREC Workshop on Multimodal Resources and Multimodal Systems Evaluation, Las Palmas, Spain, pp 77–80
  3. Beringer N, Louka K, Penide-Lopez V, Türk U (2002) End-to-end evaluation of multimodal dialogue systems: Can we transfer established methods? In: Proc. of the LREC Workshop on Multimodal Resources and Multimodal Systems Evaluation, Las Palmas, Spain, pp 558–563
  4. Bernsen N (2003) User modelling in the car. Lecture Notes in Artificial Intelligence, pp 378–382
  5. Berre AJ, Marzo GD, Khadraoui D, Charoy F, Athanasopoulos G, Pantazoglou M, Morin JH, Moraitis P, Spanoudakis N (2007) SAMBA: An agent architecture for ambient intelligence elements interoperability. In: Proc. of the Third International Conference on Interoperability of Enterprise Software and Applications, Funchal, Madeira, Portugal
  6. Bickmore T, Mauer D, Brown T (2008) Context awareness in a handheld exercise agent. Pervasive and Mobile Computing, doi:10.1016/j.pmcj.2008.05.004 (in press)
  7. Bouzy B, Cazenave T (1997) Using the object oriented paradigm to model context in computer Go. In: Proc. of Context’97, Rio, Brazil
  8. Bricon-Souf N, Newman CR (2007) Context awareness in health care: A review. International Journal of Medical Informatics 76:2–12
  9. Callejas Z, López-Cózar R (2008) Influence of contextual information in emotion annotation for spoken dialogue systems. Speech Communication 50(5):416–433
  10. Carpenter R (1992) The logic of typed feature structures. Cambridge University Press, Cambridge, England
  11. Corradini A, Mehta M, Bernsen N, Martin J, Abrilian S (2003) Multimodal input fusion in human-computer interaction. In: Proc. of the NATO-ASI Conference on Data Fusion for Situation Monitoring, Incident Detection, Alert and Response Management, Yerevan, Armenia
  12. Dale R, Moisl H, Somers H (eds) (2000) Handbook of natural language processing. Dekker Publishers
  13. Daubias P, Deléglise P (2002) Lip-reading based on a fully automatic statistical model. In: Proc. of the International Conference on Speech and Language Processing, Denver, Colorado, US, pp 209–212
  14. Dey A, Abowd G (1999) The context toolkit: Aiding the development of context-enabled applications. In: Proc. of the SIGCHI Conference on Human Factors in Computing Systems (CHI’99), Pittsburgh, Pennsylvania, US, pp 434–441
  15. Dey A, Abowd G (2000) Towards a better understanding of context and context-awareness. In: Proc. of the 2000 Conference on Human Factors in Computer Systems (CHI’00), pp 304–307
  16. Doulkeridis C, Vazirgiannis M (2008) CASD: Management of a context-aware service directory. Pervasive and Mobile Computing, doi:10.1016/j.pmcj.2008.05.001 (in press)
  17. Dutoit T (1996) An introduction to text-to-speech synthesis. Kluwer Academic Publishers
  18. Dybkjaer L, Bernsen N, Minker W (2004) Evaluation and usability of multimodal spoken language dialogue systems. Speech Communication 43:33–54
  19. Encarnação J, Kirste T (2005) Ambient intelligence: Towards smart appliance ensembles. In: From Integrated Publication and Information Systems to Virtual Information and Knowledge Environments, pp 261–270
  20. Engelmore R, Morgan T (1988) Blackboard systems. Addison-Wesley
  21. Forbes-Riley K, Litman D (2004) Modelling user satisfaction and student learning in a spoken dialogue tutoring system with generic, tutoring, and user affect parameters. In: Proc. of the Human Language Technology Conference – North American Chapter of the Association for Computational Linguistics Annual Meeting (HLT-NAACL’06), New York, US, pp 264–271
  22. Fraser M, Gilbert G (1991) Simulating speech systems. Computer Speech and Language 5:81–99
  23. Gárate A, Herrasti N, López A (2005) Genio: An ambient intelligence application in home automation and entertainment environment. In: Proc. of the Joint sOc-EUSAI Conference, pp 241–245
  24. Gaver WW (1992) Using and creating auditory icons. SFI Studies in the Sciences of Complexity, Addison Wesley Longman
  25. Georgalas N, Ou S, Azmoodeh M, Yang K (2007) Towards a model-driven approach for ontology-based context-aware application development: A case study. In: Proc. of the Fourth International Workshop on Model-Based Methodologies for Pervasive and Embedded Software (MOMPES’07), Braga, Portugal, pp 21–32
  26. Gustafson J, Bell L, Beskow J, Boye J, Carlson R, Edlund J, Granstrom B, House D, Wirén M (2000) Adapt: A multimodal conversational dialogue system in an apartment domain. In: Proc. of the International Conference on Speech and Language Processing, Beijing, China, pp 134–137
  27. Haseel L, Hagen E (2005) Adaptation of an automotive dialogue system to users’ expertise. In: Proc. of the 9th International Conference on Spoken Language Processing (Interspeech’05-Eurospeech), Lisbon, Portugal, pp 222–226
  28. Heim J, Nilsson E, Havard J (2007) User profiles for adapting speech support in the Opera web browser to disabled users. Lecture Notes in Computer Science 4397:154–172
  29. Hengartner U, Steenkiste P (2006) Avoiding privacy violations caused by context-sensitive services. Pervasive and Mobile Computing 2:427–452
  30. Henricksen K, Indulska J (2006) Developing context-aware pervasive computing applications: Models and approach. Pervasive and Mobile Computing 2:37–64
  31. Henricksen K, Indulska J, Rakotonirainy A (2002) Modeling context information in pervasive computing systems. In: Proc. of the First International Conference on Pervasive Computing, pp 167–180
  32. Ho J, Intille S (2005) Using context-aware computing to reduce the perceived burden of interruptions from mobile devices. In: Proc. of the 2005 Conference on Human Factors in Computer Systems (CHI’05), Portland, US, pp 909–918
  33. Hovy EH (1993) Automated discourse generation using discourse relations. Artificial Intelligence, Special Issue on Natural Language Processing 63:341–385
  34. Intille S, Larson K, Munguia E (2003) Designing and evaluating technology for independent aging in the home. In: Proc. of the International Conference on Aging, Disability and Independence
  35. Intille S, Larson K, Beaudin J, Nawyn J, Tapia EM, Kaushik P (2005) A living laboratory for the design and evaluation of ubiquitous computing technologies. In: Proc. of the 2005 Conference on Human Factors in Computer Systems (CHI’05), Portland, Oregon, US, pp 1941–1944
  36. Johnston M, Bangalore S, Vasireddy G, Stent A, Ehlen P, Walker M, Whittaker S, Maloor P (2002) MATCH: An architecture for multimodal dialogue systems. In: Proc. of the Association for Computational Linguistics, Philadelphia, Pennsylvania, US, pp 376–383
  37. Jokinen K (2003) Natural interaction in spoken dialogue systems. In: Proc. of the Workshop on Ontologies and Multilinguality in User Interfaces, Crete, Greece, pp 730–734
  38. Kang H, Suh E, Yoo K (2008) Packet-based context aware system to determine information system user’s context. Expert Systems with Applications 35:286–300
  39. Kettebekov S, Sharma R (2000) Understanding gestures in multimodal human computer interaction. International Journal on Artificial Intelligence Tools 9(2):205–223
  40. Korth A, Plumbaum T (2007) A framework for ubiquitous user modelling. In: Proc. of the IEEE International Conference on Information Reuse and Integration, Las Vegas, Nevada, US, pp 291–297
  41. Kovács GL, Kopácsi S (2006) Some aspects of ambient intelligence. Acta Polytechnica Hungarica 3(1):35–60
  42. Kwon O, Sadeh N (2004) Applying case-based reasoning and multi-agent intelligent system to context-aware comparative shopping. Decision Support Systems 37:199–213
  43. Langner B, Black A (2005) Using speech in noise to improve understandability for elderly listeners. In: Proc. of the IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU’05), San Juan, Puerto Rico, pp 392–396
  44. Lemon O, Bracy A, Gruenstein A, Peters S (2001) The WITAS Multi-Modal Dialogue System I. In: Proc. of Interspeech, Aalborg, Denmark, pp 1559–1562
  45. Levin E, Levin A (2006) Dialog design for user adaptation. In: Proc. of the International Conference on Acoustics, Speech and Signal Processing, Toulouse, France, pp 57–60
  46. López-Cózar R, Araki M (2005) Spoken, Multilingual and Multimodal Dialogue Systems: Development and Assessment. John Wiley & Sons
  47. López-Cózar R, Callejas Z, Montoro G (2006) DS-UCAT: A new multimodal dialogue system for an academic application. In: Proc. of Interspeech’06 Satellite Workshop Dialogue on Dialogues, Multidisciplinary Evaluation of Advanced Speech-Based Interactive Systems, Pittsburgh, Pennsylvania, US, pp 47–50
  48. Markopoulos P, de Ruyter B, Privender S, van Breemen A (2005) Case study: Bringing social intelligence into home dialogue systems. Interactions 12(4):37–44
  49. Martínez AE, Cabello R, Gómez FJ, Martínez J (2003) INTERACT-DM: A solution for the integration of domestic devices on network management platforms. In: Proc. of the IFIP/IEEE International Symposium on Integrated Network Management, Colorado Springs, Colorado, US, pp 360–370
  50. Martinovski B, Traum D (2003) Breakdown in human-machine interaction: The error is the clue. In: Proc. of the ISCA Tutorial and Research Workshop on Error Handling in Dialogue Systems, Chateau d’Oex, Vaud, Switzerland, pp 11–16
  51. McAllister D, Rodman R, Bitzer D, Freeman A (1997) Lip synchronization of speech. In: Proc. of the ESCA Workshop on Audio-Visual Speech Processing (AVSP’97), Kasteel Groenendael, Hilvarenbeek, The Netherlands, pp 133–136
  52. Möller S, Krebber J, Raake A, Smeele P, Rajman M, Melichar M, Pallotta V, Tsakou G, Kladis B, Vovos A, Hoonhout J, Schuchardt D, Fakotakis N, Ganchev T, Potamitis I (2004) INSPIRE: Evaluation of a smart-home system for infotainment management and device control. In: Proc. of the International Conference on Language Resources and Evaluation (LREC), Lisbon, Portugal, pp 1603–1606
  53. Möller S, Krebber J, Smeele P (2006) Evaluating the speech output component of a smart-home system. Speech Communication 48:1–27
  54. Montoro G, Alamán X, Haya P (2004) A plug and play spoken dialogue interface for smart environments. In: Proc. of the Fifth International Conference on Intelligent Text Processing and Computational Linguistics (CICLing’04), Seoul, South Korea, pp 360–370
  55. Nazari AA (2005) A generic UPnP architecture for ambient intelligence meeting rooms and a control point allowing for integrated 2D and 3D interaction. In: Proc. of the Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-Aware Services, Usages and Technologies, pp 207–212
  56. Ndiaye A, Gebhard P, Kipp M, Klessen M, Schneider M, Wahlster W (2005) Ambient intelligence in edutainment: Tangible interaction with life-like exhibit guides. Lecture Notes in Artificial Intelligence 3814:104–113
  57. Nigay L, Coutaz J (1995) A generic platform for addressing the multimodal challenge. In: Proc. of the SIGCHI Conference on Human Factors in Computing Systems, ACM, Denver, Colorado, US, pp 98–105
  58. Ohno T, Mukawa N, Kawato S (2003) Just blink your eyes: A head-free gaze tracking system. In: Proc. of Computer-Human Interaction, Fort Lauderdale, Florida, pp 950–951
  59. Pascoe J (1997) The Stick-e note architecture: Extending the interface beyond the user. In: Proc. of the International Conference on Intelligent User Interfaces, Orlando, Florida, US, pp 261–264
  60. Porzel R, Gurevych I (2002) Towards context-sensitive utterance interpretation. In: Proc. of the 3rd SIGdial Workshop on Discourse and Dialogue, Philadelphia, US, pp 154–161
  61. Prendinger H, Mayer S, Mori J, Ishizuka M (2003) Persona effect revisited: Using bio-signals to measure and reflect the impact of character-based interfaces. In: Proc. of the 4th International Working Conference on Intelligent Virtual Agents (IVA’03), Kloster Irsee, Germany, pp 283–291
  62. Rabiner LR, Juang BH (1993) Fundamentals of Speech Recognition. Prentice-Hall
  63. Reithinger N, Lauer C, Romary L (2002) MIAMM: Multidimensional information access using multiple modalities. In: Proc. of the International CLASS Workshop on Natural, Intelligent and Effective Interaction in Multimodal Dialogue Systems
  64. de Rosis F, Novielli N, Carofiglio V, Cavalluzzi A, de Carolis B (2006) User modeling and adaptation in health promotion dialogs with an animated character. Journal of Biomedical Informatics 39:514–531
  65. Sachetti D, Chibout R, Issarny V, Cerisara C, Landragin F (2004) Seamless access to mobile services for the mobile user. In: Proc. of the IEEE International Conference on Software Engineering, Beijing, China, pp 801–804
  66. Saini P, de Ruyter B, Markopoulos P, Breemen AV (2005) Benefits of social intelligence in home dialogue systems. In: Proc. of the 11th International Conference on Human-Computer Interaction, Las Vegas, Nevada, US, pp 510–521
  67. Satyanarayanan M (2002) Challenges in implementing a context-aware system. IEEE Distributed Systems Online 3(9)
  68. Schmidt A (2002) Ubiquitous computing: Computing in context. PhD thesis, Lancaster University
  69. Schneider M (2004) Towards a transparent proactive user interface for a shopping assistant. In: Proc. of the Workshop on Multi-User and Ubiquitous User Interfaces (MU3I), Funchal, Madeira, Portugal, vol 3, pp 10–15
  70. Shimoga KB (1993) A survey of perceptual feedback issues in dexterous telemanipulation: Part II. Finger touch feedback. In: Proc. of the IEEE Virtual Reality Annual International Symposium, Piscataway, NJ, IEEE Service Center
  71. Strang T, Linnhoff-Popien C (2004) A context modeling survey. In: Proc. of the Workshop on Advanced Context Modelling, Reasoning and Management, UbiComp 2004, Nottingham, England
  72. Wahlster W (2002) SmartKom: Fusion and fission of speech, gestures, and facial expressions. In: Proc. of the First International Workshop on Man-Machine Symbiotic Systems, pp 213–225
  73. Wahlster W (ed) (2006) SmartKom: Foundations of Multimodal Dialogue Systems. Springer
  74. Walker M, Cahn J, Whittaker S (1997) Improvising linguistic style: Social and affective bases of agent personality. In: Proc. of the 1st International Conference on Autonomous Agents (Agents’97), Marina del Rey, CA, US, pp 96–105
  75. Walker M, Litman D, Kamm C, Abella A (1998) Evaluating spoken dialogue agents with PARADISE: Two case studies. Computer Speech and Language 12(4):317–347
  76. Whittaker S, Walker M (2005) Evaluating dialogue strategies in multimodal dialogue systems. In: Minker W, Buehler D, Dybkjaer L (eds) Spoken Multimodal Human-Computer Dialogue in Mobile Environments, Kluwer
  77. Yang J, Stiefelhagen R, Meier U, Waibel A (1998) Real-time face and facial feature tracking and applications. In: Proc. of the Workshop on Audio-Visual Speech Processing, pp 79–84
  78. Yasuda H, Takahashi K, Matsumoto T (2000) A discrete HMM for online handwriting recognition. Pattern Recognition and Artificial Intelligence 14(5):675–689
  79. Zuckerman O, Maes P (2005) Awareness system for children in distributed families. In: Proc. of the 2005 International Conference on Interaction Design for Children (IDC 2005), Boulder, Colorado, US

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Dept. of Languages and Computer Systems, Faculty of Computer Science and Telecommunications, University of Granada, Spain
