A General Learning Approach to Visually Guided 3D-Positioning and Pose Control of Robot Arms
We describe a general learning approach to fine-positioning of a robot gripper in a three-dimensional workspace using visual sensor data. The approach has two steps: (a) a hybrid representation for encoding the robot state perceived by visual sensors; (b) partitioning the action space of the robot so that multiple specialized controllers can evolve.
The input encoding represents position by geometric features and uniquely describes orientation by a combination of principal components. Such a dimension-reduction procedure is essential for applying supervised as well as reinforcement learning. A fuzzy controller based on B-spline models serves as a function approximator, taking this encoded input and producing the motion parameters as outputs.
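The principal-component encoding mentioned above can be sketched in a few lines of NumPy. This is a minimal, illustrative reduction of flattened gripper images to a handful of coefficients; the function names, image sizes, and the choice of SVD are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

def pca_basis(images, k):
    """Compute the top-k principal components of a set of
    flattened training images (one image per row of `images`)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data matrix; rows of vt are the
    # principal directions, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def encode(image, mean, basis):
    """Project one image onto the PCA basis: the k projection
    coefficients serve as a low-dimensional descriptor of the
    gripper's appearance (and hence its orientation)."""
    return basis @ (image.flatten() - mean)

# Toy example: 20 random 8x8 "images" reduced to 3 coefficients.
rng = np.random.default_rng(0)
data = rng.normal(size=(20, 64))
mean, basis = pca_basis(data, k=3)
code = encode(data[0].reshape(8, 8), mean, basis)
print(code.shape)  # (3,)
```

The point of the reduction is that a learning controller then operates on a few coefficients rather than on raw pixels, which keeps the input dimension of the function approximator manageable.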
A complex positioning and pose-control task is divided into consecutive sub-tasks, each solved by a specialized self-learning controller. The approach has been successfully applied to controlling six-axis robots that translate the gripper in the three-dimensional workspace and rotate it about the z-axis. Instead of undergoing a cumbersome hand-eye calibration process, our system lets the controllers evolve using systematic perturbation motions around the desired position and orientation.
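The perturbation-based training idea can be illustrated as follows: the robot is displaced by small known offsets around the taught goal pose, and each (sensor encoding, corrective motion) pair becomes a supervised training sample for a controller. The `observe` callback and the 4-DOF offset (x, y, z, rotation about z) are illustrative assumptions standing in for the real sensing pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_and_record(observe, n_samples=100, max_offset=0.01):
    """Collect (observation, motion) training pairs by applying
    small random offsets around the taught goal pose.
    `observe` stands in for the real vision pipeline; all names
    here are illustrative, not the paper's API."""
    samples = []
    for _ in range(n_samples):
        # dx, dy, dz, d_theta: perturbation applied to the gripper.
        offset = rng.uniform(-max_offset, max_offset, size=4)
        obs = observe(offset)
        # The corrective motion back to the goal is the inverse offset.
        samples.append((obs, -offset))
    return samples

# Toy sensing model: the observation is the offset plus sensor noise.
data = perturb_and_record(lambda off: off + rng.normal(scale=1e-4, size=4))
print(len(data))  # 100
```

Training on such self-generated pairs is what removes the need for an explicit hand-eye calibration: the controller learns the sensor-to-motion mapping directly from the perturbation data.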
Keywords: Fuzzy Controller · Atan2 Function · Cerebellar Model Articulation Controller · Robot State · Effective Dimension Reduction