
Intelligent Service Robotics

Volume 12, Issue 2, pp 137–148

Challenges and implemented technologies used in autonomous drone racing

  • Hyungpil Moon
  • Jose Martinez-Carranza
  • Titus Cieslewski
  • Matthias Faessler
  • Davide Falanga
  • Alessandro Simovic
  • Davide Scaramuzza
  • Shuo Li
  • Michael Ozo
  • Christophe De Wagter
  • Guido de Croon
  • Sunyou Hwang
  • Sunggoo Jung
  • Hyunchul Shim
  • Haeryang Kim
  • Minhyuk Park
  • Tsz-Chiu Au
  • Si Jung Kim
Original Research Paper

Abstract

Autonomous drone racing (ADR) challenges autonomous drones to navigate a cluttered indoor environment without relying on any external sensing: all sensing and computing must be done with onboard resources. Although no team has yet completed the whole racing track, the most successful teams implemented waypoint-tracking methods and robust visual recognition of the distinctly colored gates, since complete environmental information was given to participants before the events. In this paper, we introduce the purpose of ADR as a benchmark testing ground for autonomous drone technologies and analyze the challenges and technologies of the two previous ADRs, held at IROS 2016 and IROS 2017. Five teams that participated in these events present their implemented technologies, which cover a modified ORB-SLAM, a robust alignment method for waypoint deployment, sensor fusion for motion estimation, deep learning for gate detection and motion control, and stereo vision for gate detection.
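As a purely illustrative sketch, and not any team's actual pipeline, the Python/OpenCV fragment below shows the generic idea behind color-based gate detection and a naive steering command: threshold the image in HSV space, take the largest blob as the gate, and turn its horizontal offset into a proportional yaw-rate correction. The color thresholds, gain value, and function names here are assumptions chosen for illustration.

    # Minimal sketch of color-based gate detection and a naive steering command.
    # Assumes orange gates and OpenCV; thresholds and gains are illustrative only,
    # not any competing team's actual implementation.
    import cv2
    import numpy as np

    # Illustrative HSV range for an orange gate; real thresholds must be
    # retuned for the venue's lighting conditions.
    HSV_LOW = np.array([5, 120, 120])
    HSV_HIGH = np.array([20, 255, 255])

    def detect_gate_center(bgr_frame):
        """Return the (x, y) pixel center of the largest orange blob, or None."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)
        # Remove speckle noise before searching for contours.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

    def steering_command(center, frame_shape, gain=0.002):
        """Map the gate's horizontal pixel offset to a yaw-rate command (rad/s)."""
        height, width = frame_shape[:2]
        error_x = center[0] - width / 2.0  # pixels off the image center
        return -gain * error_x             # proportional correction

In practice such a detector is only one component: the teams described in this paper combine gate perception with onboard state estimation and trajectory tracking, and the sensitivity of fixed color thresholds to lighting is one of the robustness challenges discussed below.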

Keywords

Autonomous drone · Drone racing · Autonomous flight · Autonomous navigation

Acknowledgements

J. Martinez-Carranza is grateful for the funding received from the Royal Society through the Newton Advanced Fellowship with reference NA140454. Team UZH thanks Elia Kaufmann, Antoni Rosinol Vidal, and Henri Rebecq for their great help in the software implementation and integration. The team from TU Delft would like to thank the organizers of the Autonomous Drone Race event. Team UNIST's work was supported by NRF (2.180186.01 and 2.170511.01). All authors would like to thank the organizers of the Autonomous Drone Racing events.

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  • Hyungpil Moon (1), corresponding author
  • Jose Martinez-Carranza (2)
  • Titus Cieslewski (3)
  • Matthias Faessler (3)
  • Davide Falanga (3)
  • Alessandro Simovic (3)
  • Davide Scaramuzza (3)
  • Shuo Li (4)
  • Michael Ozo (4)
  • Christophe De Wagter (4)
  • Guido de Croon (4)
  • Sunyou Hwang (5)
  • Sunggoo Jung (5)
  • Hyunchul Shim (5)
  • Haeryang Kim (6)
  • Minhyuk Park (6)
  • Tsz-Chiu Au (6)
  • Si Jung Kim (7)

  1. Sungkyunkwan University, Suwon, Korea
  2. Instituto Nacional de Astrofisica Optica y Electronica (INAOE), Puebla, Mexico
  3. University of Zurich, Zurich, Switzerland
  4. Micro Air Vehicle Laboratory, Faculty of Aerospace Engineering, TU Delft, Delft, The Netherlands
  5. KAIST, Daejeon, Korea
  6. UNIST, Ulsan, Korea
  7. UNLV, Las Vegas, USA