
Learning Gestures for Customizable Human-Computer Interaction in the Operating Room

  • Loren Arthur Schwarz
  • Ali Bigdelou
  • Nassir Navab
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6891)

Abstract

Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and the interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon’s movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach robustly recognizes learned gestures and distinguishes them from other movements.
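
The abstract describes learning a low-dimensional manifold model per customized gesture from inertial-sensor training data, then using those models both to recognize gestures and to track poses for fine-grained control. The sketch below is not the authors' implementation; it only illustrates the general idea using scikit-learn's spectral embedding (a Laplacian-eigenmaps-style method) and a simple nearest-neighbor match. All class names, parameters, the rejection threshold, and the synthetic sensor data are illustrative assumptions.

```python
# A minimal sketch, not the authors' implementation: one manifold model is
# learned per customized gesture from inertial-sensor training frames, and an
# observed movement is attributed to the gesture whose model explains it best
# (or rejected as an "other" movement). Names and parameters are illustrative.

import numpy as np
from sklearn.manifold import SpectralEmbedding   # Laplacian-eigenmaps-style embedding
from sklearn.neighbors import NearestNeighbors


class GestureManifold:
    """Low-dimensional manifold model for one gesture."""

    def __init__(self, n_dims=2, n_neighbors=8):
        self.embedding = SpectralEmbedding(n_components=n_dims, n_neighbors=n_neighbors)
        self.index = NearestNeighbors(n_neighbors=1)

    def fit(self, frames):
        """frames: (n_samples, n_channels) training poses from body-worn sensors."""
        frames = np.asarray(frames, dtype=float)
        self.low_dim = self.embedding.fit_transform(frames)  # manifold coordinates
        self.index.fit(frames)
        return self

    def match(self, frames):
        """Return (mean distance to training data, manifold coords of nearest poses).

        Spectral embedding has no out-of-sample mapping, so this sketch scores new
        frames by proximity to the training data and reads off the manifold
        coordinate of the closest training pose as a continuous control value.
        """
        dists, idx = self.index.kneighbors(np.asarray(frames, dtype=float))
        return float(dists.mean()), self.low_dim[idx[:, 0]]


def recognize(models, frames, reject_threshold=1.0):
    """Best-matching gesture and pose coordinates, or (None, None) if rejected."""
    results = {name: m.match(frames) for name, m in models.items()}
    best = min(results, key=lambda name: results[name][0])
    score, coords = results[best]
    return (best, coords) if score < reject_threshold else (None, None)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for per-gesture training sequences (e.g. orientation
    # readings from a few wireless inertial sensors).
    training = {
        "zoom_slices": rng.normal(0.0, 0.1, size=(200, 12)),
        "scroll_images": rng.normal(1.0, 0.1, size=(200, 12)),
    }
    models = {name: GestureManifold().fit(data) for name, data in training.items()}

    observed = rng.normal(1.0, 0.1, size=(30, 12))  # new movement window
    print(recognize(models, observed)[0])           # expected: "scroll_images"
```

Keeping one model per gesture mirrors the paper's motivation that surgeons can add or replace gestures from their own training examples; the manifold coordinate returned for a matched pose is what could drive continuous, fine-grained control of a device parameter.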

Keywords

Target System · Gesture Recognition · Inertial Sensor · Manifold Model · Gesture Recognition Method


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Loren Arthur Schwarz (1)
  • Ali Bigdelou (1)
  • Nassir Navab (1)
  1. Computer Aided Medical Procedures, Technische Universität München, Germany
