Abstract
This paper presents an intelligent assistive robotic system for people with myopathy. In this context, we are developing a 4-DoF assistive exoskeletal orthosis for the upper limb, with particular attention to Human-Machine Interaction (HMI). We propose visual sensing as an interface that converts the user's head gestures and mouth expressions into suitable control commands. Such camera-based control is non-intrusive and therefore well suited to disabled users. Moreover, we make the command more robust by adding a visual context analysis component.
We first describe the problem setting and the mechanical design. Next, we present the two approaches developed for the visual sensing interface: head control and mouth expression control. Finally, we introduce context detection for scene understanding.
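The paper itself does not publish an implementation, but a minimal sketch can illustrate the kind of head-gesture-to-command mapping the abstract describes. The example below assumes OpenCV's Haar cascade face detector and a webcam; the dead-zone thresholds and the discrete command names are hypothetical placeholders, not the authors' actual interface, which additionally uses mouth expression and context analysis.

```python
# Illustrative sketch only: assumes OpenCV (cv2) and a webcam.
# The mapping from face offset to a discrete command is a hypothetical
# stand-in for the head-gesture interface described in the abstract.
import cv2

def head_command(frame, cascade, dead_zone=0.15):
    """Map horizontal/vertical face offset to a coarse command string."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "idle"                                  # no face: do not move the orthosis
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3]) # keep the largest detected face
    fh, fw = gray.shape
    dx = (x + w / 2) / fw - 0.5                        # normalised offset from image centre
    dy = (y + h / 2) / fh - 0.5
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "hold"                                  # head roughly centred: keep posture
    if abs(dx) > abs(dy):
        return "left" if dx < 0 else "right"
    return "up" if dy < 0 else "down"

if __name__ == "__main__":
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print(head_command(frame, cascade))
    cap.release()
```

In a real assistive setting, such discrete commands would typically be filtered over several frames before being sent to the orthosis controller, so that momentary detection noise cannot trigger a movement.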