
A Bio-Inspired Robotic Mechanism for Autonomous Locomotion in Unconventional Environments

  • Chapter
Autonomous Robotic Systems

Part of the book series: Studies in Fuzziness and Soft Computing (STUDFUZZ, volume 116)

Abstract

This chapter presents a robotic mechanism designed to navigate unconventional environments such as rigid aerial lines (power, telephone, railroad) and reticulated structures (ladders, grilles, bars, and the like). A novel obstacle-avoidance method for this mechanism is also introduced. Computing collision-free trajectories generally requires an analytical description of the physical structure of the environment and the solution of the mechanism's kinematic equations. In dynamic, uncertain environments with unknown obstacles, however, real-time collision avoidance is very hard to achieve by analytical techniques. The main strength of the proposed method is precisely that it departs from the analytical approach: it uses no formal description of the location and shape of the obstacles, nor does it solve the kinematic equations of the mechanism. Instead, the method follows the perception-reason-action paradigm and is based on a reinforcement learning process guided by perceptual feedback, which can be considered biologically inspired at the functional level. From this perspective, obstacle avoidance is modeled as a multi-objective optimization problem. As shown in the chapter, the method can be applied straightforwardly to real-time collision avoidance for articulated mechanisms, including conventional manipulator arms.
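The perception-reason-action idea summarized above can be illustrated with a minimal, purely hypothetical sketch. The code below is not the chapter's method: it omits the reinforcement learning of the objective weights and uses an invented planar two-link arm, invented weights (`w_goal`, `w_obs`), and a single point obstacle. It shows only how obstacle avoidance can be cast as greedy minimization of a weighted multi-objective index computed from perceived clearance, with no analytical model of the obstacle's shape.

```python
import math

# Illustrative planar two-link arm; link lengths are arbitrary.
L1 = L2 = 1.0

def tip(q1, q2):
    """Forward kinematics of the arm tip."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def perceive(q, obstacle):
    """Perception: sensed clearance between the tip and a point obstacle.
    In a real system this would come from sensors, not geometry."""
    x, y = tip(*q)
    return math.hypot(x - obstacle[0], y - obstacle[1])

def step(q, goal, obstacle, w_goal=1.0, w_obs=0.5, dq=0.05):
    """Reason/act: among small candidate joint increments, pick the one
    minimizing a weighted multi-objective index
        J = w_goal * d_goal - w_obs * min(clearance, 0.3),
    i.e. approach the goal while rewarding clearance up to a saturation."""
    best, best_j = q, float("inf")
    for d1 in (-dq, 0.0, dq):
        for d2 in (-dq, 0.0, dq):
            cand = (q[0] + d1, q[1] + d2)
            x, y = tip(*cand)
            d_goal = math.hypot(x - goal[0], y - goal[1])
            j = w_goal * d_goal - w_obs * min(perceive(cand, obstacle), 0.3)
            if j < best_j:
                best, best_j = cand, j
    return best

# Usage: drive the tip toward a goal while staying clear of an obstacle.
q = (0.0, 0.0)
goal, obstacle = (0.0, 1.8), (1.2, 0.8)
for _ in range(200):
    q = step(q, goal, obstacle)
x, y = tip(*q)
```

In the chapter's formulation the weights of the competing objectives would be adapted by reinforcement from perceptual feedback rather than fixed by hand as they are here.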




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Maravall, D., de Lope, J. (2003). A Bio-Inspired Robotic Mechanism for Autonomous Locomotion in Unconventional Environments. In: Zhou, C., Maravall, D., Ruan, D. (eds) Autonomous Robotic Systems. Studies in Fuzziness and Soft Computing, vol 116. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1767-6_10

  • Print ISBN: 978-3-7908-2523-7

  • Online ISBN: 978-3-7908-1767-6

  • eBook Packages: Springer Book Archive
