
A study on the optimal route design considering time of mobile robot using recurrent neural network and reinforcement learning

Published in: Journal of Mechanical Science and Technology

Abstract

Recently, the robot market has been growing rapidly, and robots are being applied in various industrial fields. In the future, robots will work in more complex and diverse environments: for example, a robot may perform one or more tasks and collaborate with people or with other robots. In such settings, planning paths that allow robots to perform their tasks efficiently is an important issue. In this study, we assume that a mobile robot performs one or more tasks, moves freely among various places, and works alongside other robots. If the path of the mobile robot is planned with a shortest-path algorithm alone, waiting time may occur because the planned path is blocked by other robots. Sometimes a task can be completed in a shorter time by turning back or by performing another task first. That is, the shortest path and the shortest-time path do not coincide. The purpose of this study is to construct a network with which the mobile robot plans a path that minimizes travel time, judging for itself on the basis of environment information and the path-planning information of other robots. For this purpose, a network is constructed using a recurrent neural network, and reinforcement learning is used to train it. We established the environment for network learning using the robot simulation program V-REP. We compare the effects of various network structures and select the network model that best meets this purpose. In future work, we will try to prove the effect of the network by comparing it with existing algorithms.
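The distinction the abstract draws between the shortest path and the shortest-time path can be illustrated with a toy sketch. The code below is not the authors' recurrent-network model; it is a minimal tabular Q-learning example (graph layout, blocking schedule, and all parameter values are invented for illustration) in which the geometrically shortest route is temporarily occupied by another robot, so the agent learns that a longer detour arrives sooner.

```python
import random

# Toy illustration: a robot moves on a small graph.  The direct edge
# A->G is occupied by another robot at timesteps t = 1..5, forcing the
# robot to wait in place, while the longer detour S->B->C->G is always
# free.  Tabular Q-learning with a cost of 1 per timestep learns the
# shortest-TIME route rather than the shortest path.
EDGES = {"S": ["A", "B"], "A": ["G"], "B": ["C"], "C": ["G"], "G": []}
BLOCKED = {("A", "G"): range(1, 6)}   # edge occupied at these timesteps
T_MAX = 12                            # episode horizon

def step(node, t, action):
    """Traverse an edge, or wait one timestep if it is currently blocked."""
    if t in BLOCKED.get((node, action), ()):
        return node, t + 1            # blocked: wait in place
    return action, t + 1

def run_episode(Q, eps=0.1, alpha=0.5, learn=True):
    node, t = "S", 0
    while node != "G" and t < T_MAX:
        acts = EDGES[node]
        if learn and random.random() < eps:
            a = random.choice(acts)   # explore
        else:                         # greedy w.r.t. current Q
            a = max(acts, key=lambda x: Q.get((node, t, x), 0.0))
        nxt, nt = step(node, t, a)
        if learn:
            best = 0.0 if nxt == "G" else max(
                (Q.get((nxt, nt, b), 0.0) for b in EDGES[nxt]), default=0.0)
            q = Q.get((node, t, a), 0.0)
            Q[(node, t, a)] = q + alpha * (-1 + best - q)  # cost 1 per step
        node, t = nxt, nt
    return t                          # arrival time at the goal

random.seed(0)
Q = {}
for _ in range(2000):
    run_episode(Q)
arrival = run_episode(Q, eps=0.0, learn=False)
print(arrival)  # detour S->B->C->G arrives at t = 3; waiting route needs t = 7
```

Because the state includes the current timestep, the value of an edge depends on *when* the robot reaches it; the paper's recurrent network plays an analogous role by carrying temporal context, while this sketch simply enumerates (node, time) pairs.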


References

  1. N. Ku, S. Ha and M. I. Roh, Design of controller for mobile robot in welding process of shipbuilding engineering, Journal of Computational Design and Engineering, 1 (4) (2014) 243–255.

    Article  Google Scholar 

  2. V. Roberge, M. Tarbouchi and G. Labonte, Comparison of parallel genetic algorithm and particle swarm optimization for real-time uav path planning, IEEE Transactions on Industrial Informatics, 9 (1) (2013) 132–141.

    Article  Google Scholar 

  3. Y. Zhang, D. W. Gong and J. H. Zhang, Robot path planning in uncertain environment using multi-objective particle swarm optimization, Neurocomputing, 103 (2013) 172–185.

    Article  Google Scholar 

  4. M. A. Hossain and I. Ferdous, Autonomous robot path planning in dynamic environment using a new optimization technique inspired by bacterial foraging technique, Robotics and Autonomous Systems, 64 (2015) 137–141.

    Article  Google Scholar 

  5. M. A. Contreras-Cruz, V. Ayala-Ramirez and U. H. Hernandez-Belmonte, Mobile robot path planning using artificial bee colony and evolutionary programming, Applied Soft Computing, 30 (2015) 319–328.

    Article  Google Scholar 

  6. F. Duchon, A. Babinec, M. Kajan, P. Beno, M. Florek, T. Fico and L. Jurisica, Path planning with modified a star algorithm for a mobile robot, Procedia Engineering, 96 (2014) 59–69.

    Article  Google Scholar 

  7. N. Cao, K. H. Low and J. M. Dolan, Multi-robot informative path planning for active sensing of environmental phenomena: A tale of two algorithms, Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems (2013) 7–14.

    Google Scholar 

  8. J. W. Lee, D. H. Lee and J. J. Lee, Global path planning using improved ant colony optimization algorithm through bilateral cooperative exploration, Proc. of the 5th IEEE International Conference on Digital Ecosystems and Technology Conference (DEST) (2011) 109–113.

    Google Scholar 

  9. D. L. Cruz and W. Yu, Path planning of multi-agent systems in unknown environment with neural kernel smoothing and reinforcement learning, Neurocomputing, 232 (2017) 34–42.

    Article  Google Scholar 

  10. B. Bakker, Reinforcement learning with LSTM in non-Markovian tasks with long-term dependencies, Technical report, Dept. of Psychology, Leiden University (2001).

    Google Scholar 

  11. S. Y. Fu, L. W. Han, Y. Tian and G. S. Yang, Path planning for unmanned aerial vehicle based on genetic algorithm, 2012 IEEE 11th International Conference on Cognitive Informatics & Cognitive Computing, IEEE (2012) 140–144.

    Chapter  Google Scholar 

  12. J. Ni, X. Li, X. Fan and J. Shen, A dynamic risk level based bioinspired neural network approach for robot path planning, World Automation Congress (2014) 829–833.

    Google Scholar 

  13. D. Zhu, W. Li, M. Yan and S. X. Yang, The path planning of AUV based on DS information fusion map building and bio-inspired neural network in unknown dynamic environment, International Journal of Advanced Robotic Systems, 11 (3) (2014).

    Google Scholar 

  14. S. X. Yang and C. Luo, A neural network approach to complete coverage path planning, IEEE Transactions on Systems, Man and Cybernetics, 34 (1) (2004) 718–724.

    Article  Google Scholar 

  15. H. Qu, S. X. Yang, A. R. Willms and Z. Yi, Real-time robot path planning based on a modified pulse-coupled neural network model, IEEE Transactions on Neural Networks, 20 (11) (2009) 1724–1739.

    Article  Google Scholar 

  16. R. Kala, A. Shukla, R. Tiwari, S. Rungta and R. R. Janghel, Mobile robot navigation control in moving obstacle environment using genetic algorithm, artificial neural networks and A* algorithm, Computer Science and Information Engineering, 2009 WRI World Congress, 4 (2009) 705–713.

    Article  Google Scholar 

  17. D. Xin, C. Hua-hua and G. Wei-kang, Neural network and genetic algorithm based global path planning in a static environment, Journal of Zhejiang University-Science A, 6 (6) (2005) 549–554.

    Article  Google Scholar 

  18. L. E. Zarate, M. Becker, B. D. M. Garrido and H. S. C. Rocha, An artificial neural network structure able to obstacle avoidance behavior used in mobile robots, IECON 02 [IEEE 2002 28th Annual Conference of the Industrial Electronics Society], 3 (2002) 2457–2461.

    Article  Google Scholar 

  19. S. J. Huang, S. Liu and C. H. Wu, Intelligent humanoid mobile robot with embedded control and stereo visual feedback, Journal of Mechanical Science and Technology, 29 (9) (2015) 3919–3931.

    Article  Google Scholar 

  20. H. Brahmi, B. Ammar and A. M. Alimi, Intelligent path planning algorithm for autonomous robot based on recurrent neural networks, International Conference Advanced Logistics and Transport (ICALT) (2013) 199–204.

    Google Scholar 

  21. T. W. Chow and Y. Fang, A recurrent neural-networkbased real-time learning control strategy applying to nonlinear systems with unknown dynamics, IEEE Transactions on Industrial Electronics, 45 (1) (1998) 151–161.

    Article  Google Scholar 

  22. Y. Pan and J. Wang, Model predictive control of unknown nonlinear dynamical systems based on recurrent neural networks, IEEE Transactions on Industrial Electronics, 59 (8) (2012) 3089–3101.

    Article  MathSciNet  Google Scholar 

  23. B. Zhang, Z. Mao, W. Liu and J. Liu, Geometric reinforcement learning for path planning of UAVs, Journal of Intelligent & Robotic Systems, 77 (2) (2015) 391–409.

    Article  Google Scholar 

  24. E. Rohmer, S. P. Singh and M. Freese, V-REP: A versatile and scalable robot simulation framework, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE (2013) 1321–1326.

    Chapter  Google Scholar 

  25. H. T. Hwang, S. H. Lee, J. W. Lee, H. C. Kim, M. H. Woo, K. W. Moon, J. Lu, H. S. Ohk, J. K. Kim and H. W. Suh, Development of a transportability evaluation system using swept path analysis and multi-body dynamic simulation, Journal of Mechanical Science and Technology, 31 (11) (2017) 5359–5365.

    Article  Google Scholar 

  26. P. R. Wurman, R. D’Andrea and M. Mountz, Coordinating hundreds of cooperative, autonomous vehicles in warehouses, AI Magazine, 29 (1) (2008).

    Google Scholar 

  27. A. Tudico, N. Lau, E. Pedrosa, F. Amaral, C. Mazzotti and M. Carricato, Improving and benchmarking motion planning for a mobile manipulator operating in unstructured environments, Portuguese Conference on Artificial Intelligence (2017) 498–509.

    Google Scholar 

  28. V. Mnih et al., Human-level control through deep reinforcement learning, Nature, 518 (7540) (2015) 529–533.

    Article  Google Scholar 


Author information


Corresponding author

Correspondence to Soo-Hong Lee.

Additional information

Recommended by Associate Editor Yang Shi

Min Hyuk Woo is currently pursuing his Master's degree at Yonsei University in Seoul, Korea. He received his bachelor's degree in mechanical engineering from Yonsei University in 2013. His current research interests include PLM, web-based collaborative design, and CAD/CAM.

Soo-Hong Lee is currently a full-time Professor at the Department of Mechanical Engineering, Yonsei University in Seoul, Korea. He received his bachelor's degree in mechanical engineering from Seoul National University in 1981 and his master's degree in mechanical engineering design from Seoul National University in 1983. He completed his Ph.D. at Stanford University, California, USA, in 1991. His current research interests include intelligent CAD, knowledge-based engineering design, concurrent engineering, product design management, product lifecycle management, artificial intelligence in design, and design automation.

Hye Min Cha is a graduate student at the Department of Integrated Engineering, Yonsei University in Seoul, Korea. She received her bachelor's degree in mechanical engineering and human environment design from Yonsei University in 2016. Her current research is related to machine learning, knowledge-based engineering design, CAD/CAM, and concurrent design.


About this article


Cite this article

Woo, M.H., Lee, SH. & Cha, H.M. A study on the optimal route design considering time of mobile robot using recurrent neural network and reinforcement learning. J Mech Sci Technol 32, 4933–4939 (2018). https://doi.org/10.1007/s12206-018-0941-y


Keywords

Navigation