Abstract
This chapter presents a robotic mechanism designed to navigate unconventional environments such as rigid aerial lines (power, telephone, and railroad lines) and reticulated structures (ladders, grills, bars, and the like). A novel obstacle-avoidance method for this mechanism is also introduced. The computation of collision-free trajectories generally requires an analytical description of the physical structure of the environment and the solution of the mechanism's kinematic equations. In dynamic, uncertain environments with unknown obstacles, however, real-time collision avoidance is very hard to achieve by analytical techniques. The main strength of the proposed method lies precisely in its departure from the analytical approach: it uses no formal description of the location and shape of the obstacles, nor does it solve the kinematic equations of the mechanism. Instead, the method follows the perception-reason-action paradigm and is based on a reinforcement learning process guided by perceptual feedback, which can be considered biologically inspired at the functional level. From this perspective, obstacle avoidance is modeled as a multi-objective optimization problem. As the chapter shows, the method can be applied straightforwardly to real-time collision avoidance for articulated mechanisms, including conventional manipulator arms.
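To make the abstract's idea concrete, the following is a minimal sketch (not the authors' actual algorithm) of perception-guided, reward-based obstacle avoidance for a planar two-link arm. The two objectives (approach the goal, keep clearance from an obstacle) are scalarized into a single reward, and actions are chosen by greedy selection among randomly sampled joint increments. All names, weights, and the toy geometry are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch: reward-guided obstacle avoidance for a planar
# 2-link arm. The "perception" step here is simulated by computing the
# tip position directly; in the chapter's method, distances would come
# from sensory feedback, with no kinematic model.
import math
import random

L1, L2 = 1.0, 1.0          # link lengths (assumed)
GOAL = (1.2, 1.0)          # target for the end effector (assumed)
OBSTACLE = (0.8, 0.6)      # point obstacle (assumed)
W_GOAL, W_OBS = 1.0, 0.5   # objective weights (hypothetical)

def end_effector(q1, q2):
    """Tip position of the arm (stands in for perceptual feedback)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def reward(q1, q2):
    """Scalarized multi-objective reward: approach goal, keep clearance.
    The clearance bonus saturates at 0.3 so that, beyond a safe margin,
    only progress toward the goal improves the reward."""
    x, y = end_effector(q1, q2)
    d_goal = math.hypot(x - GOAL[0], y - GOAL[1])
    d_obs = math.hypot(x - OBSTACLE[0], y - OBSTACLE[1])
    return -W_GOAL * d_goal + W_OBS * min(d_obs, 0.3)

def step(q1, q2, rng, delta=0.05):
    """One perceive-reason-act cycle: sample a few joint increments and
    keep the best-rewarded configuration (never worse than the current)."""
    best, best_r = (q1, q2), reward(q1, q2)
    for _ in range(8):
        c1 = q1 + rng.uniform(-delta, delta)
        c2 = q2 + rng.uniform(-delta, delta)
        r = reward(c1, c2)
        if r > best_r:
            best, best_r = (c1, c2), r
    return best

rng = random.Random(0)
q1, q2 = 0.0, 0.0
for _ in range(400):
    q1, q2 = step(q1, q2, rng)
```

Note the design choice: the weighted-sum scalarization is the simplest way to turn the multi-objective formulation into a single learning signal; Pareto-based schemes (as in the chapter's multi-objective framing) would keep the objectives separate instead.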
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
Cite this chapter
Maravall, D., de Lope, J. (2003). A Bio-Inspired Robotic Mechanism for Autonomous Locomotion in Unconventional Environments. In: Zhou, C., Maravall, D., Ruan, D. (eds) Autonomous Robotic Systems. Studies in Fuzziness and Soft Computing, vol 116. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1767-6_10
Publisher Name: Physica, Heidelberg
Print ISBN: 978-3-7908-2523-7
Online ISBN: 978-3-7908-1767-6