UAV – Virtual Migration Based on Obstacle Avoidance Model

  • Ci-Fong He
  • Chin-Feng Lai
  • Shau-Yin Tseng
  • Ying Hsun Lai
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1227)


In recent years, obstacle avoidance technology for unmanned aerial vehicles (UAVs) has developed rapidly. Controlling UAVs manually takes considerable manpower, so many studies use reinforcement learning to make UAVs fly autonomously. Training an aircraft with reinforcement learning in a real environment is expensive and time-consuming, because reinforcement learning learns from mistakes, so the learning process often involves collisions. Wu's research trained a capable model, but the real environment differs greatly from the simulation environment; we therefore retrain this model and transfer it to the real environment, so that a UAV can accomplish the same task in the real environment more cheaply and quickly.
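The idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: it runs tabular Q-learning on a toy 2x5 grid with one blocked cell, pretrains in a "simulation" layout, then briefly fine-tunes after the obstacle moves, mimicking transfer from simulation to a changed real environment. All environment details, names, and hyperparameters here are illustrative assumptions.

```python
# Toy sketch (not the paper's method): Q-learning with sim-to-real style fine-tuning.
import random
from collections import defaultdict

ROWS, COLS = 2, 5                    # tiny two-row corridor; goal is the last column
ACTIONS = ("right", "switch_row")

def step(state, action, obstacle):
    """One transition; bumping into the obstacle keeps the agent in place at a cost."""
    r, c = state
    if action == "switch_row":
        return (1 - r, c), -1.0, False
    if (r, c + 1) == obstacle:
        return (r, c), -10.0, False   # collision: the "mistake" RL learns from
    if c + 1 == COLS - 1:
        return (r, c + 1), 10.0, True # reached the goal column
    return (r, c + 1), -1.0, False

def train(q, obstacle, episodes, alpha=0.5, gamma=0.9, eps=0.2):
    """Epsilon-greedy tabular Q-learning on the given obstacle layout."""
    for _ in range(episodes):
        state, done, steps = (0, 0), False, 0
        while not done and steps < 50:
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
            nxt, reward, done = step(state, ACTIONS[a], obstacle)
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state, steps = nxt, steps + 1
    return q

def greedy_rollout(q, obstacle, max_steps=12):
    """Follow the learned policy greedily; report (reached_goal, collisions)."""
    state, done, collisions = (0, 0), False, 0
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
        state, reward, done = step(state, ACTIONS[a], obstacle)
        collisions += reward == -10.0
        if done:
            break
    return done, collisions

random.seed(0)
q = defaultdict(lambda: [0.0, 0.0])
train(q, obstacle=(0, 2), episodes=500)        # pretrain in the "simulation" layout
sim_ok, sim_hits = greedy_rollout(q, (0, 2))
train(q, obstacle=(1, 2), episodes=200)        # short fine-tune after "transfer"
real_ok, real_hits = greedy_rollout(q, (1, 2))
```

The point of the sketch is the last four lines: the pretrained Q-table is reused as the starting point in the changed layout, so far fewer episodes are needed than training from scratch, which is the cost argument the abstract makes for retraining rather than relearning.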


Keywords: Unmanned aerial vehicles · Reinforcement learning · Obstacle avoidance


  1. Chiang, M., Zhang, T.: Fog and IoT: an overview of research opportunities. IEEE Internet Things J. 3, 854–864 (2016)
  2. Bachrach, A., He, R., Roy, N.: Autonomous flight in unknown indoor environments. Int. J. Micro Air Veh. 1, 217–228 (2010)
  3. Bills, C., Chen, J., Saxena, A.: Autonomous MAV flight in indoor environments using single image perspective cues. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 5776–5783 (2011)
  4. Pham, H.X., La, H.M., Feil-Seifer, D., Nguyen, L.V.: Autonomous UAV navigation using reinforcement learning (2018)
  5. Wu, T.C., Tseng, S.Y., Lai, C.F., Ho, C.Y., Lai, Y.H.: Navigating assistance system for quadcopter with deep reinforcement learning. In: Proceedings of the 2018 1st International Cognitive Cities Conference (IC3 2018), pp. 16–19 (2018)
  6. Koenig, N., Howard, A.: Design and use paradigms for Gazebo, an open-source multi-robot simulator, pp. 2149–2154 (2005)
  7. Lin, C., Liu, D., Wu, X., He, Z., Wang, W., Li, W.: Setup and performance of a combined hardware-in-loop and software-in-loop test for MMC-HVDC control and protection system. In: 9th International Conference on Power Electronics - ECCE Asia (ICPE-ECCE Asia), pp. 1333–1338 (2015)
  8. Meier, L., Tanskanen, P., Fraundorfer, F., Pollefeys, M.: PIXHAWK: a system for autonomous flight using onboard computer vision. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2992–2997 (2011)
  9. Shah, S., Dey, D., Lovett, C., Kapoor, A.: AirSim: high-fidelity visual and physical simulation for autonomous vehicles, pp. 621–635 (2017)
  10. Kahn, G., Villaflor, A., Pong, V., Abbeel, P., Levine, S.: Uncertainty-aware reinforcement learning for collision avoidance (2017)
  11. Wu, Z., Li, J., Zuo, J., Li, S.: Path planning of UAVs based on collision probability and Kalman filter. IEEE Access 6, 34237–34245 (2018)
  12. Lei, X., Jiang, X., Wang, C.: Design and implementation of a real-time video stream analysis system based on FFMPEG. In: Proceedings of the 2013 4th World Congress on Software Engineering (WCSE 2013), pp. 212–216 (2013)
  13. Lubow, B.C.: SDP: generalized software for solving stochastic dynamic optimization problems. 23, 738–742 (2018)

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Ci-Fong He (1)
  • Chin-Feng Lai (1)
  • Shau-Yin Tseng (2)
  • Ying Hsun Lai (3)
  1. National Cheng Kung University, Tainan, Taiwan
  2. ITRI/ICL, Chutung, Hsinchu, Taiwan
  3. National Taitung University, Taitung City, Taitung County 950, Taiwan