Natural Interaction in Intelligent Spaces: Designing for Architecture and Entertainment



Designing responsive environments for public and private venues has become increasingly popular. Museums wish to create attractive “hands-on” exhibits that engage and interest their visitors. Several research groups are building “aware homes” that can help elderly people or chronic patients live autonomously, while still calling for or providing immediate assistance when needed.

The design of these smart spaces must meet several criteria. Their main feature is to allow people to move freely within them. Whether users are navigating a 3D world or requesting assistance, we cannot strap them into encumbering sensors and limiting tethers in order to interact with the space. Natural interaction, based on people’s spontaneous gestures, movements, and behaviors, is therefore an essential requirement of intelligent spaces. Capturing the user’s natural input and triggering a corresponding action, however, is in many cases not sufficient to ensure an appropriate response from the system. We also need to interpret users’ actions in context and communicate information that is relevant to people, appropriate to the situation, and adequately articulated (simple or complex) at the right time.
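The idea of interpreting the same natural input differently depending on context can be illustrated with a minimal Bayesian update. The sketch below is hypothetical, not the authors' system: the intent labels, likelihoods, and priors are illustrative placeholders. An ambiguous raised-arm gesture is combined with a context-dependent prior over user intents, so the same observation yields different responses in a museum than in an aware home.

```python
def posterior_intent(likelihood, prior):
    """P(intent | gesture) ∝ P(gesture | intent) · P(intent | context)."""
    unnorm = {i: likelihood[i] * prior[i] for i in prior}
    total = sum(unnorm.values())
    return {i: p / total for i, p in unnorm.items()}

# P(observed "raised arm" gesture | intent) -- illustrative numbers only
likelihood = {"navigate": 0.4, "select_exhibit": 0.3, "call_help": 0.5}

# Context shifts the prior: a museum visitor vs. an assisted-living resident
museum_prior = {"navigate": 0.50, "select_exhibit": 0.45, "call_help": 0.05}
home_prior   = {"navigate": 0.20, "select_exhibit": 0.10, "call_help": 0.70}

museum = posterior_intent(likelihood, museum_prior)
home = posterior_intent(likelihood, home_prior)
print(max(museum, key=museum.get))  # most probable intent in the museum
print(max(home, key=home.get))      # most probable intent in the aware home
```

With these numbers the museum context resolves the gesture as navigation, while the aware-home context resolves it as a call for help: context, not the raw gesture alone, determines the system's response.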


Keywords: Bayesian Network · Gesture Recognition · Hand Gesture · Natural Interface · Smart Space



Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Sensing Places and MIT, Santa Monica, USA
