
Visuomotor Architecture for High-Speed Robot Control

  • Koichi Hashimoto
  • Akio Namiki
  • Masatoshi Ishikawa
Chapter
Part of the Trends in Mathematics book series (TM)

Abstract

A hierarchical control architecture is proposed on the basis of an interaction model between efferent and afferent information in brain motor control. The model has five levels: motoneurons, premotor interneurons, a pattern generator, parameter selection, and action planning. The effectors, together with their biophysical properties, receive commands from the motoneurons. In the proposed architecture, the premotor interneurons and motoneurons are implemented as a servo module, the pattern generator corresponds to the motion planner, and parameter selection is realized by an adaptation module. The afferent information serves as the feedback signal, while the efferent information corresponds to the motion commands and parameter adaptation. Grasping and handling of a dynamically moving object are implemented on a DSP network with high-speed vision, a dexterous hand, and a 7-DOF manipulator. The results show responsive and flexible actions that demonstrate the effectiveness of the proposed hierarchical modular structure.
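
To make the module mapping concrete, the sketch below outlines one control cycle of such a hierarchy. It is a minimal illustration, assuming a single-rate loop with simple PD and interpolation rules; the class names (ServoModule, MotionPlanner, AdaptationModule), gains, and update laws are assumptions for illustration, not the authors' DSP-network implementation.

```python
# Minimal sketch of the hierarchical visuomotor architecture described above.
# All class and method names are illustrative assumptions, not the authors' code.

import numpy as np


class ServoModule:
    """Lowest level: premotor interneurons and motoneurons acting as a joint servo."""

    def __init__(self, kp=50.0, kd=5.0):
        self.kp, self.kd = kp, kd          # gains tuned by the adaptation module

    def command(self, q_ref, q, dq):
        # Efferent command: PD servo toward the reference posture.
        return self.kp * (q_ref - q) - self.kd * dq


class MotionPlanner:
    """Middle level: pattern generator producing reference trajectories."""

    def plan(self, target, q):
        # Simple interpolation toward the visually tracked target.
        return q + 0.1 * (target - q)


class AdaptationModule:
    """Upper level: parameter selection driven by afferent (sensory) error."""

    def adapt(self, servo, tracking_error):
        # Stiffen the servo when the visual tracking error grows.
        servo.kp = np.clip(servo.kp + 10.0 * np.linalg.norm(tracking_error), 10.0, 200.0)


def control_step(vision_target, q, dq, planner, servo, adapter):
    """One control cycle: afferent feedback in, efferent torque command out."""
    q_ref = planner.plan(vision_target, q)        # pattern generation
    adapter.adapt(servo, vision_target - q)       # parameter selection
    return servo.command(q_ref, q, dq)            # motoneuron-level output
```

The layering mirrors the mapping in the abstract: the servo module closes fast local feedback, the motion planner plays the role of the pattern generator, and the adaptation module performs parameter selection from the same afferent signal.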

Keywords

Vision · Control · Robot · Human brain architecture · High-speed vision · Grasping



Copyright information

© Springer Science+Business Media New York 2003

Authors and Affiliations

  • Koichi Hashimoto (1)
  • Akio Namiki (1)
  • Masatoshi Ishikawa (1)

  1. Department of Information Physics and Computing, Graduate School of Information Science and Technology, The University of Tokyo, Bunkyo, Tokyo, Japan
