mCube – Towards a Versatile Gesture Input Device for Ubiquitous Computing Environments

  • Doo Young Kwon
  • Stephan Würmlin
  • Markus Gross
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4836)

Abstract

We propose mCube, a novel versatile gesture input device that supports both desktop and handheld interaction in ubiquitous computing environments. It enables desktop interaction by being moved across a planar surface, like a computer mouse; by lifting the device from the surface, users can seamlessly continue handheld interaction in the same application. Since mCube is a single, completely wireless device, it can be carried and used across different display platforms. We explore the use of multiple sensors to support a wide range of tasks, namely gesture commands, multi-dimensional manipulation and navigation, and tool selection on a pie menu. This paper presents the design and implementation of the device together with a set of design principles, and demonstrates its exploratory interaction techniques. We also discuss the results of a user evaluation and future directions.
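The seamless transition the abstract describes can be illustrated with a minimal sketch. All names, sensor fields, and the contact-based switching rule below are assumptions for illustration, not details from the paper: a surface-contact reading decides whether a sensor sample is interpreted as desktop (mouse-like) or handheld (free-space gesture) input.

```python
# Hypothetical sketch of mCube-style mode switching (assumed names and
# logic, not the authors' implementation). A surface-contact flag routes
# each sample to desktop or handheld interpretation.

from dataclasses import dataclass


@dataclass
class SensorSample:
    contact: bool         # e.g. a surface-contact sensor reading
    dx: float             # planar displacement while on the surface
    dy: float
    accel: tuple          # 3-axis accelerometer reading while handheld


def interpret(sample: SensorSample) -> str:
    """Route a sensor sample to desktop or handheld interaction."""
    if sample.contact:
        # On the surface: behave like a mouse, reporting 2D motion.
        return f"desktop: move cursor by ({sample.dx}, {sample.dy})"
    # Lifted off the surface: continue in handheld gesture mode.
    ax, ay, az = sample.accel
    return f"handheld: gesture input accel=({ax}, {ay}, {az})"


on_surface = SensorSample(contact=True, dx=3.0, dy=-1.5, accel=(0.0, 0.0, 1.0))
lifted = SensorSample(contact=False, dx=0.0, dy=0.0, accel=(0.2, -0.1, 0.9))
print(interpret(on_surface))
print(interpret(lifted))
```

Because both modes consume the same sample stream, the application never needs an explicit mode command from the user, which is what makes the transition feel seamless.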

Keywords

Gesture Recognition · Virtual Object · Input Device · Interaction Technique · Robust Tracking
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Doo Young Kwon 1
  • Stephan Würmlin 1
  • Markus Gross 1

  1. Computer Graphics Laboratory, ETH Zurich, 8092 Zurich, Switzerland