Gestures for Large Display Control

  • Wim Fikkert
  • Paul van der Vet
  • Gerrit van der Veer
  • Anton Nijholt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5934)

Abstract

The hands are highly suited to interacting with large public displays. It is, however, not obvious which gestures come naturally for easy and robust use of such an interface. We first explored how uninstructed users gesture when asked to perform basic tasks. Our subjects gestured with great similarity and readily produced gestures they had seen before, though not necessarily in a human-computer interface. In a second investigation, these and other gestures were rated by a hundred subjects. A gesture set for giving explicit commands to large displays emerged from these ratings. Notably, for the selection task, tapping the index finger in mid-air, as one would click a traditional mouse button, scored highest by far. It seems that the mouse has become a metaphor in everyday life.
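
To make the selection finding concrete, the sketch below illustrates how such a mid-air index-finger "tap" might be recognised from tracked fingertip data. It is a minimal, hypothetical example and not the system described in the paper: the TapDetector class, its thresholds, and the assumption that a hand tracker delivers the fingertip's distance to the display at a fixed frame rate are all illustrative choices.

```python
# Hypothetical sketch (not the authors' implementation): detecting a mid-air
# "index-finger tap" selection gesture from a stream of fingertip depth
# samples. Assumes an external hand tracker supplies the fingertip's distance
# to the display (in metres) at a fixed frame rate.

from collections import deque


class TapDetector:
    """Flags a tap when the fingertip moves quickly towards the display
    and back within a short time window (all thresholds are illustrative)."""

    def __init__(self, frame_rate_hz=30, window_s=0.4, min_push_m=0.04):
        self.window = deque(maxlen=int(frame_rate_hz * window_s))
        self.min_push = min_push_m  # forward travel required to count as a tap

    def update(self, fingertip_depth_m):
        """Feed one depth sample; returns True when a tap is detected."""
        self.window.append(fingertip_depth_m)
        if len(self.window) < self.window.maxlen:
            return False
        start, end = self.window[0], self.window[-1]
        closest = min(self.window)
        pushed_in = (start - closest) >= self.min_push   # moved towards screen
        returned = abs(end - start) < self.min_push / 2  # came back out again
        if pushed_in and returned:
            self.window.clear()  # avoid double-triggering on the same motion
            return True
        return False


# Example with synthetic samples: finger rests at 0.60 m, dips to 0.55 m, returns.
if __name__ == "__main__":
    detector = TapDetector()
    samples = [0.60] * 5 + [0.58, 0.56, 0.55, 0.56, 0.58] + [0.60] * 5
    for depth in samples:
        if detector.update(depth):
            print("tap detected -> issue selection command")
```

Run on the synthetic samples, the detector reports a single tap, which a gesture interface of this kind would map onto the selection command for the object currently pointed at.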

Keywords

Human-centered computing · User interfaces · Input devices and strategies · Intuitive hand gestures · Large display interaction

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Wim Fikkert (1)
  • Paul van der Vet (1)
  • Gerrit van der Veer (1)
  • Anton Nijholt (1)

  1. Human Media Interaction, University of Twente, Enschede, The Netherlands