Abstract
A multimodal user interface capable of perceiving the speech, movements, poses, and gestures of meeting participants in order to determine their needs provides a natural and intuitively understandable way of interacting with the developed intelligent meeting room. The room's awareness of the participants' spatial positions, current activities, roles in the current event, and preferences helps to predict participants' intentions and needs more accurately. The accomplished integration of Nokia mobile phones with the sensor network and smart services allows users to control effectors, audio/video equipment, and other facilities from outside the room. Scenarios of multimodal interaction with the room, as well as issues of adapting the user interface to the limitations of mobile phone browsers, are discussed.
© 2009 Springer-Verlag Berlin Heidelberg
Ronzhin, A.L., Budkov, V.Y. (2009). Multimodal Interaction with Intelligent Meeting Room Facilities from Inside and Outside. In: Balandin, S., Moltchanov, D., Koucheryavy, Y. (eds.) Smart Spaces and Next Generation Wired/Wireless Networking. ruSMART/NEW2AN 2009. Lecture Notes in Computer Science, vol. 5764. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04190-7_8
DOI: https://doi.org/10.1007/978-3-642-04190-7_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-04188-4
Online ISBN: 978-3-642-04190-7