Multimedia Tools and Applications, Volume 38, Issue 3, pp 385–405

Natural interaction on tabletops

  • Stefano Baraldi
  • Alberto Del Bimbo
  • Lea Landucci


We present two Computer Vision-based systems that enable multiple users to concurrently manipulate graphic objects on tabletop displays. The two solutions have different hardware layouts and use different algorithms for gesture analysis and recognition. The first is a media-handling application that can be used by co-located and remote users; the second is a knowledge-building application in which users manipulate the contents of a wiki as a visual concept map. The performance of both systems is evaluated and discussed. A conceptual framework is introduced, providing fundamental guidelines for the design of natural interaction languages on tabletops.


Keywords: HCI · Natural interaction · Tabletop interaction · Wiki · Multimedia interface



Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Stefano Baraldi¹
  • Alberto Del Bimbo¹
  • Lea Landucci¹

  1. Media Integration and Communication Center, Università degli Studi di Firenze, Florence, Italy
