Abstract
This paper describes a gestural guidance interface that controls the motion of a mobile platform through a set of predefined static and dynamic hand gestures inspired by the marshalling code. Images captured by an onboard color camera are processed at video rate to track the operator's head and hands. A fuzzy-logic controller adjusts the camera's pan, tilt and zoom so that the operator's head remains centered and properly sized within the image plane. Gestural commands are defined as two-hand motion patterns, whose features are fed at video rate to a trained neural network. A command is considered recognized once the classifier has produced a series of consistent interpretations; a motion-modifying command is then issued in a way that ensures coherent, smooth motion. The guidance system can be trained online.
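The recognition criterion described above, accepting a command only after the classifier has produced a series of consistent interpretations, can be sketched as a simple temporal filter. This is an illustrative reconstruction, not the authors' implementation; the window length and label names are hypothetical:

```python
from collections import deque

class ConsistencyFilter:
    """Accept a gesture label only after it has been produced for
    `window` consecutive frames (hypothetical parameter, not taken
    from the paper)."""

    def __init__(self, window=5):
        self.window = window
        self.history = deque(maxlen=window)

    def update(self, label):
        """Feed one per-frame classifier output; return the recognized
        command, or None if the interpretations are not yet consistent."""
        self.history.append(label)
        if len(self.history) == self.window and len(set(self.history)) == 1:
            return label  # consistent run: issue the command
        return None  # inconsistent or insufficient evidence
```

Filtering of this kind trades a small recognition latency (here, `window` frames) for robustness against spurious single-frame misclassifications, which matters when each recognized gesture directly modifies the platform's motion.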
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Paquin, V., Cohen, P. (2004). A Vision-Based Gestural Guidance Interface for Mobile Robotic Platforms. In: Sebe, N., Lew, M., Huang, T.S. (eds) Computer Vision in Human-Computer Interaction. CVHCI 2004. Lecture Notes in Computer Science, vol 3058. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24837-8_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-22012-1
Online ISBN: 978-3-540-24837-8