Human Sign Recognition for Robot Manipulation

  • Leonardo Saldivar-Piñon
  • Mario I. Chacon-Murguia
  • Rafael Sandoval-Rodriguez
  • Javier Vega-Pineda
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7329)

Abstract

This paper addresses the problem of recognizing signs made by a person to guide a robot. The proposed method is based on color analysis of video of a moving person making signs. The analysis consists of segmenting the middle body, locating the arm and forearm, and recognizing the arm and forearm positions. The method was tested experimentally on videos with different target colors and illumination conditions. Quantitative evaluation indicates 97.76% correct detection of the signs over 1,807 frames.
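The paper does not publish its implementation, but the pipeline the abstract describes begins with color-based segmentation of the target region in each video frame. A minimal sketch of that kind of step, assuming an HSV color space and illustrative threshold values (the function name, the hue range, and the saturation/value floors are all assumptions, not the authors' parameters):

```python
import numpy as np

def segment_by_color(frame_hsv, hue_range, min_sat=50, min_val=50):
    """Return a binary mask of pixels whose HSV values fall in the target range.

    frame_hsv : (H, W, 3) uint8 array in HSV order.
    hue_range : (low, high) inclusive hue bounds for the target color.
    The saturation/value floors reject dark or washed-out pixels.
    """
    h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    lo, hi = hue_range
    mask = (h >= lo) & (h <= hi) & (s >= min_sat) & (v >= min_val)
    return mask.astype(np.uint8)

# Synthetic 4x4 "frame": a small target-colored patch (hue ~5) on a dark background.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = (5, 200, 200)          # hue 5, well-saturated, bright
mask = segment_by_color(frame, hue_range=(0, 10))
print(int(mask.sum()))                   # 4 pixels segmented
```

In a full system, the resulting mask would feed the later stages the abstract mentions: locating the arm and forearm within the segmented region and classifying their positions as signs.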

Keywords

sign recognition · robot manipulation · video segmentation

References

  1. Feil-Seifer, D.J., Matarić, M.: Human-Robot Interaction. In: Encyclopedia of Complexity and Systems Science, pp. 4643–4659. Springer, New York (2009)
  2. Khamis, A.M.: Interacción Remota con Robots Móviles Basada en Internet. Doctoral thesis, Universidad Carlos III de Madrid, Madrid, Spain (2003)
  3. Clarke, R.: Asimov’s Laws of Robotics: Implications for Information Technology, Part 1. IEEE Computer 26(12), 53–61 (1993)
  4. Draper, V.: Environmental Restoration and Waste Management Program Teleoperator Hand Controllers: Contextual Human Factors Assessment. Oak Ridge National Laboratory, U.S. Department of Energy, Report (1994)
  5. Yang, H.-D., Park, A.-Y., Lee, S.-W.: Gesture Spotting and Recognition for Human-Robot Interaction. IEEE Transactions on Robotics 23(2), 256–270 (2007)
  6. Khan, I.R., Miyamoto, H.: Face and Arm-Posture Recognition for Secure Human-Machine Interaction. In: IEEE International Conference on Systems, Man and Cybernetics, pp. 411–417 (2008)
  7. Salti, S., Schreer, O., Di Stefano, L.: Real-time 3D Arm Pose Estimation from Monocular Video for Enhanced HCI. In: Proceedings of the 1st ACM Workshop on Vision Networks for Behavior Analysis, Vancouver, Canada, pp. 1–8 (2008)
  8. Siddiqui, M., Medioni, G.: Robust Real-Time Upper Body Limb Detection and Tracking. In: 4th ACM International Workshop on Video Surveillance & Sensor Networks, Santa Barbara, California, USA (2006)
  9. Horain, P., Bomb, M.: 3D Model Based Gesture Acquisition Using a Single Camera. In: Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision, Orlando, Florida, pp. 158–164 (2002)
  10. Shaker, S., Saade, J., Asmar, D.: Fuzzy Inference-Based Person-Following Robot. International Journal of Systems Applications, Engineering and Development 2(1), 29–34 (2008)
  11. Chen, H., Chen, T., Chen, Y., Lee, S.: Human Action Recognition Using Star Skeleton. In: 4th ACM International Workshop on Video Surveillance & Sensor Networks, Santa Barbara, California, USA (2006)
  12. Tarokh, M., Kuo, J.: Vision Based Person Tracking and Following in Unstructured Environments. Department of Computer Science, San Diego State University, San Diego, California, USA

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Leonardo Saldivar-Piñon (1)
  • Mario I. Chacon-Murguia (1)
  • Rafael Sandoval-Rodriguez (1)
  • Javier Vega-Pineda (1)

  1. Visual Perception Applications on Robotic Lab, Chihuahua Institute of Technology, Mexico