Developing Intuitive Gestures for Spatial Interaction with Large Public Displays

  • Yubo Kou
  • Yong Ming Kow
  • Kelvin Cheng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9189)

Abstract

Freehand gestures in gesture-based interactive systems are often designed around the technical limitations of gesture-capturing technologies, resulting in gestures that may not be intuitive to users. In this paper, we investigated freehand gestures that are intuitive to users with common technical knowledge. We conducted a gesture solicitation study with 30 participants, who were asked to complete 21 tasks on a large display using freehand gestures. All gestures were video-recorded, and we conducted in-depth interviews with each participant about the gestures they had chosen and why they had chosen them. We found that a large proportion of intuitive freehand gestures had metaphoric origins in the daily use of two-dimensional surface displays such as smartphones and tablets. However, participants developed new gestures, particularly when the objects they were manipulating deviated from those commonly seen in surface technologies. We discuss when and why participants developed new gestures rather than reusing gestures from similar tasks on two-dimensional surface displays, and we suggest design implications for gestures on large public displays.

Keywords

Large displays · Gestures · Freehand · Spatial interaction

Acknowledgement

This research is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its International Research Centre @ Singapore Funding Initiative and administered by the Interactive and Digital Media Programme Office.

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Informatics, University of California, Irvine, USA
  2. School of Creative Media, City University of Hong Kong, Hong Kong, China
  3. Keio-NUS CUTE Center, National University of Singapore, Singapore