Vision-Based User Interface for Mouse and Multi-mouse System

  • Yuki Onodera
  • Yasushi Kambayashi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8210)

Abstract

This paper proposes a vision-based methodology that recognizes users' fingertips so that users can perform various mouse operations by gesture, and that also supports multi-mouse operation. Using the Ramer-Douglas-Peucker algorithm, the system retrieves the fingertip coordinates from the palm of the hand. The system also recognizes the user's intended mouse operation from the movements of the recognized fingers. When the system recognizes several palms, it switches to multi-mouse mode so that several users can coordinate their work on the same screen; the number of mouse pointers equals the number of recognized palms. To implement our proposal, we have employed the Kinect motion capture camera and used its tracking function to recognize the users' fingers. Operations on the mouse pointers are driven by the coordinates of the detected fingers. To demonstrate the effectiveness of our proposal, we have conducted several user experiments. We observed that the Kinect is suitable equipment for implementing multi-mouse operation, and that the users who participated in the experiments quickly learned the multi-mouse environment and performed naturally in front of the Kinect motion capture camera.
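The paper's implementation is not published here; the following is a minimal sketch of the Ramer-Douglas-Peucker simplification step the abstract describes. Applied to a hand-contour polyline, it keeps only the vertices that deviate most from the simplified shape, so fingertip candidates survive as sharp contour points. The function name `rdp` and the `epsilon` threshold are illustrative assumptions, not the authors' code.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification.

    Recursively keeps the point farthest from the chord between the
    first and last points whenever that distance exceeds epsilon.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of each interior point from the chord.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Split at the farthest point and simplify both halves.
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    # All interior points are within tolerance: keep only the endpoints.
    return [points[0], points[-1]]
```

For example, a nearly straight contour segment collapses to its endpoints, while a sharp protrusion (such as an extended fingertip) is retained as a vertex in the simplified polyline.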

Keywords

Kinect; Multi-mouse; Hand Tracking; Skeleton Tracking; OpenNI


References

  1. Farhadi-Niaki, F., Aghaei, R.G., Arya, A.: Empirical study of a vision-based depth-sensitive human-computer interaction system. In: Tenth Asia Pacific Conference on Computer Human Interaction, pp. 101–108. ACM Press (2012)
  2. Ueda, M., Takeuchi, I.: Mouse cursors surf the net – developing multi-computer multi-mouse systems. In: IPSJ Programming Symposium, pp. 25–32 (2007) (in Japanese)
  3. Viola, P., Jones, M.: Robust real-time object detection. In: Second International Workshop on Statistical and Computational Theories of Vision (2001)
  4. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: IEEE Computer Vision and Pattern Recognition, vol. 1, pp. 511–518 (2001)
  5. Chen, Q., Cordea, M.D., Petriu, E.M., Varkonyi-Koczy, A.R., Whalen, T.E.: Human-computer interaction for smart environment applications using hand-gesture and facial-expressions. International Journal of Advanced Media and Communication 3(1/2), 95–109 (2009)
  6. Kolsch, M., Turk, M.: Robust hand detection. In: International Conference on Automatic Face and Gesture Recognition, pp. 614–619 (2004)
  7. Kolsch, M., Turk, M.: Analysis of rotational robustness of hand detection with a Viola-Jones detector. In: IAPR International Conference on Pattern Recognition, vol. 3, pp. 107–110 (2004)
  8. Zhang, Q., Chen, F., Liu, X.: Hand gesture detection and segmentation based on difference background image with complex background. In: International Conference on Embedded Software and Systems, pp. 338–343 (2008)
  9. Anton-Canalis, L., Sanchez-Nielsen, E., Castrillon-Santana, M.: Hand pose detection for vision-based gesture interfaces. In: Conference on Machine Vision Applications, pp. 506–509 (2005)
  10. Marcel, S., Bernier, O., Viallet, J.E., Collobert, D.: Hand gesture recognition using input-output hidden Markov models. In: Conference on Automatic Face and Gesture Recognition, pp. 456–461 (2000)
  11. Yu, C., Wang, X., Huang, H., Shen, J., Wu, K.: Vision-based hand gesture recognition using combinational features. In: Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 543–546 (2010)
  12. Chiba, S., Yosimitsu, K., Maruyama, M., Toyama, K., Iseki, H., Muragaki, Y.: Opect: Non-contact image processing system using Kinect. J. Japan Society of Computer Aided Surgery 14(3), 150–151 (2012) (in Japanese)
  13. Nichii: Opect: Non-contact image processing system using Kinect, http://www.nichiiweb.jp/medical/category/hospital/opect.html
  14. Ahn, S.C., Lee, T., Kim, I., Kwon, Y., Kim, H.: Computer vision-based interactive presentation system. In: Asian Conference for Computer Vision (2004)
  15. Wagner, B.: Effective C# (Covers C# 4.0): 50 Specific Ways to Improve Your C#, 2nd edn. Addison-Wesley Professional (2010)
  16. OpenNI: The standard framework for 3D sensing, http://www.openni.org/
  18. Douglas, D., Peucker, T.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. The Canadian Cartographer 10(2), 112–122 (1973)
  19. Kinect for Windows: Voice, Movement & Gesture Recognition Technology, http://www.microsoft.com/en-us/kinectforwindows/

Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  • Yuki Onodera (1)
  • Yasushi Kambayashi (2)
  1. Department of Retail Service Systems, Cube System Inc., Shinagawa-ku, Japan
  2. Department of Computer and Information Engineering, Nippon Institute of Technology, Miyashiro-machi, Japan
