
Log-Based Reward Field Function for Deep-Q-Learning for Online Mobile Robot Navigation

  • Arun Kumar Sah
  • Prases K. Mohanty (email author)
  • Vikas Kumar
  • Animesh Chhotray
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 711)

Abstract

Path planning is one of the major challenges in designing a mobile robot. In this paper, we implement a Deep Q-Learning algorithm for the autonomous navigation of a wheeled mobile robot. We propose a log-based reward field function to be incorporated into the Deep Q-Learning algorithm. The performance of the proposed algorithm is verified in both simulated and physical environments. Finally, the robot's obstacle-avoidance ability is measured using a hit-rate metric.
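Only the abstract appears on this page, so the exact form of the proposed reward field is not reproduced here. As a rough illustration of the idea, the sketch below (Python) assumes a reward that grows logarithmically as the robot approaches the goal and falls off sharply near obstacles, together with a simple hit-rate helper. Every name and constant in it (log_reward, goal_dist, obstacle_dist, safe_radius, the 0.5 weighting, hit_rate) is an assumption made for illustration, not the authors' published formulation.

    import math

    def log_reward(goal_dist: float, obstacle_dist: float,
                   safe_radius: float = 0.2, eps: float = 1e-6) -> float:
        """Illustrative log-shaped step reward; the paper's actual
        log-based reward field function is not given on this page."""
        # Reward rises as the robot nears the goal: -log(d) grows as d -> 0.
        goal_term = -math.log(goal_dist + eps)
        # Penalty grows sharply as clearance to the nearest obstacle shrinks.
        clearance = max(obstacle_dist - safe_radius, eps)
        return goal_term + 0.5 * math.log(clearance)

    def hit_rate(num_collisions: int, num_episodes: int) -> float:
        """One plausible reading of the hit-rate metric: the fraction
        of trial runs in which the robot collided with an obstacle."""
        return num_collisions / num_episodes

Under this reading, the reward would be consumed by the Deep Q-Learning update at each step, replacing a sparse goal/collision reward with a smooth field the network can follow.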

Keywords

Path planning · Obstacle avoidance · Mobile robot · Wheel robot · Reward function · Q-learning


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Arun Kumar Sah (1)
  • Prases K. Mohanty (1) (email author)
  • Vikas Kumar (1)
  • Animesh Chhotray (2)
  1. Department of Mechanical Engineering, National Institute of Technology, Yupia, India
  2. Department of Mechanical Engineering, National Institute of Technology, Rourkela, India
