Path Planning for a Statically Stable Biped Robot Using PRM and Reinforcement Learning

Published in the Journal of Intelligent and Robotic Systems.

Abstract

In this paper, path planning and obstacle avoidance for a statically stable biped robot using PRM and reinforcement learning are discussed. The main objective of the paper is to compare these two methods of path planning for applications involving a biped robot. The statically stable biped robot under consideration is a 4-degree-of-freedom walking robot that can follow any given trajectory on flat ground and has a fixed step length of 200 mm. It is shown that the path generated by the first method (PRM) is the shortest smooth path, but it also increases the computational burden on the controller, as the robot has to turn at almost every step. The second method, by contrast, produces paths composed of straight-line segments and hence requires less computation for trajectory following. Experiments were also conducted to demonstrate the effectiveness of the reinforcement-learning-based path planning method.
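To illustrate the first method discussed in the abstract, the following is a minimal, self-contained sketch of a probabilistic roadmap (PRM) planner in 2D. It is not the authors' implementation: the workspace bounds, circular obstacle model, sample count, and neighbour count `k` are all illustrative assumptions. The sketch samples collision-free points, connects each to its nearest neighbours by collision-free straight segments, and extracts the shortest roadmap path with Dijkstra's algorithm.

```python
import math
import random
import heapq

def collides(p, q, obstacles, steps=20):
    """Check the straight segment p->q against circular obstacles
    (cx, cy, r) by sampling points along it."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        for cx, cy, r in obstacles:
            if math.hypot(x - cx, y - cy) <= r:
                return True
    return False

def prm_path(start, goal, obstacles, n_samples=200, k=8, seed=0):
    """Build a PRM in a 10x10 workspace and return a start->goal
    waypoint list via Dijkstra, or None if the roadmap is disconnected."""
    rng = random.Random(seed)
    nodes = [start, goal]
    # Sample collision-free configurations.
    while len(nodes) < n_samples + 2:
        p = (rng.uniform(0, 10), rng.uniform(0, 10))
        if not any(math.hypot(p[0] - cx, p[1] - cy) <= r
                   for cx, cy, r in obstacles):
            nodes.append(p)
    # Connect each node to its k nearest collision-free neighbours.
    adj = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        near = sorted(range(len(nodes)),
                      key=lambda j: math.dist(p, nodes[j]))[1:k + 1]
        for j in near:
            if not collides(p, nodes[j], obstacles):
                d = math.dist(p, nodes[j])
                adj[i].append((j, d))
                adj[j].append((i, d))
    # Dijkstra from node 0 (start) to node 1 (goal).
    dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if 1 not in dist:
        return None
    path, u = [1], 1
    while u != 0:
        u = prev[u]
        path.append(u)
    return [nodes[i] for i in reversed(path)]

obstacles = [(5.0, 5.0, 1.5)]  # one circular obstacle, assumed layout
path = prm_path((1.0, 1.0), (9.0, 9.0), obstacles)
```

Because the roadmap chains many short randomly placed segments, the resulting path changes heading at nearly every waypoint, which is exactly the turning burden the paper attributes to PRM-generated paths for a fixed-step-length biped.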



Author information

Corresponding author

Correspondence to Ashish Dutta.

About this article

Cite this article

Kulkarni, P., Goswami, D., Guha, P. et al. Path Planning for a Statically Stable Biped Robot Using PRM and Reinforcement Learning. J Intell Robot Syst 47, 197–214 (2006). https://doi.org/10.1007/s10846-006-9071-3


Keywords

Navigation