Open Space Attraction Based Navigation in Dark Tunnels for MAVs

  • Christoforos Kanellakis
  • Petros Karvelis
  • George Nikolakopoulos
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11754)


This work establishes a novel framework for characterizing the open space of featureless, dark tunnel environments for Micro Aerial Vehicle (MAV) navigation tasks. The proposed method processes images from a single camera to identify the deepest area in the scene and derive a collision-free heading command for the MAV. Inspired by haze removal approaches, the proposed idea is structured around a single-image depth map estimation scheme that requires no metric depth measurements. The core contribution of the developed framework is the extraction of a 2D centroid in the image plane that characterizes the center of the tunnel's darkest area, which is assumed to represent the open space, while the robustness of the proposed scheme is examined under varying light and dust conditions. Simulation and experimental results demonstrate the effectiveness of the proposed method in challenging underground tunnel environments [1].
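The haze-removal-inspired pipeline the abstract describes (a dark-channel depth cue followed by a centroid over the darkest region) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the patch size, the percentile threshold, and the function names `dark_channel` and `open_space_centroid` are assumptions introduced here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel prior (He et al. [4]): per-pixel minimum over the
    colour channels, followed by a local minimum filter over a patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def open_space_centroid(img, patch=15, pct=5.0):
    """Centroid (row, col) of the darkest part of the dark channel,
    taken here as a proxy for the tunnel's open space: in a dark tunnel,
    the deepest (least illuminated) area attracts the heading command."""
    dc = dark_channel(img, patch)
    # Keep the darkest pct% of pixels and average their coordinates.
    thresh = np.percentile(dc, pct)
    rows, cols = np.nonzero(dc <= thresh)
    return rows.mean(), cols.mean()

# Synthetic example: a bright frame with one dark patch standing in
# for the tunnel opening; the centroid lands near the patch centre.
frame = np.full((100, 100, 3), 200, dtype=np.uint8)
frame[30:50, 60:80] = 10
row, col = open_space_centroid(frame)
```

In a full system, the offset of this centroid from the image centre would be mapped to a yaw/heading correction for the MAV; that control mapping is outside the scope of this sketch.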


Keywords: Depth map estimation · Open space attraction · Visual navigation · Micro Aerial Vehicles


  1. DARPA Subterranean Challenge. Accessed 06 May 2019
  2. Cozman, F., Krotkov, E.: Depth from scattering. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 801–806 (1997)
  3. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Prentice-Hall, Upper Saddle River (2006)
  4. He, K., Sun, J., Tang, X.: Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011)
  5. Khattak, S., Papachristos, C., Alexis, K.: Vision-depth landmarks and inertial fusion for navigation in degraded visual environments. In: Bebis, G., et al. (eds.) ISVC 2018. LNCS, vol. 11241, pp. 529–540. Springer, Cham (2018)
  6. Koenig, N., Howard, A.: Design and use paradigms for Gazebo, an open-source multi-robot simulator. In: 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 3, pp. 2149–2154. IEEE (2004)
  7. Mascarich, F., Khattak, S., Papachristos, C., Alexis, K.: A multi-modal mapping unit for autonomous exploration and mapping of underground tunnels. In: 2018 IEEE Aerospace Conference, pp. 1–7. IEEE (2018)
  8. Özaslan, T., et al.: Autonomous navigation and mapping for inspection of penstocks and tunnels with MAVs. IEEE Robot. Autom. Lett. 2(3), 1740–1747 (2017)
  9. Soille, P.: Morphological Image Analysis: Principles and Applications, 2nd edn. Springer, Berlin (2003)
  10. Tan, C.H., Sufiyan, D., Ang, W.J., Win, S.K.H., Foong, S.: Design optimization of sparse sensing array for extended aerial robot navigation in deep hazardous tunnels. IEEE Robot. Autom. Lett. 4(2), 862–869 (2019)
  11. Tan, R.T.: Visibility in bad weather from a single image. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)
  12. Theodoridis, S., Koutroumbas, K.: Pattern Recognition, 4th edn. Academic Press, Orlando (2008)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Christoforos Kanellakis (1)
  • Petros Karvelis (1)
  • George Nikolakopoulos (1)

  1. Robotics Team, Department of Computer, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden