
Image Enhancing in Poorly Illuminated Subterranean Environments for MAV Applications: A Comparison Study

  • Christoforos Kanellakis
  • Petros Karvelis
  • George Nikolakopoulos
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11754)

Abstract

This work presents a comprehensive study and evaluation of existing low-level vision techniques for low-light image enhancement, targeting applications in subterranean environments. More specifically, an emerging effort is currently pursuing the deployment of Micro Aerial Vehicles (MAVs) in subterranean environments for search and rescue missions, infrastructure inspection and other tasks. A major part of the autonomy of these vehicles, as well as the feedback to the operator, is based on processing the information provided by onboard visual sensors. Nevertheless, subterranean environments are characterized by low natural illumination, which directly affects the performance of the visual algorithms employed. In this article, a novel and extensive comparison study of five state-of-the-art low-light image enhancement algorithms is presented, evaluating their performance and identifying further developments needed. The evaluation has been performed on datasets collected with the onboard sensor of a MAV in real underground tunnel environments under challenging conditions.
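To make the class of techniques under comparison concrete, the following is a minimal sketch of a classic Retinex-style low-light enhancement step (logarithmic reflectance estimation with a Gaussian illumination estimate). It is an illustrative baseline only, not one of the five specific algorithms evaluated in the paper; the file names, sigma value, and rescaling choices are assumptions for demonstration.

```python
# Minimal single-scale Retinex sketch (illustrative; not the paper's exact pipeline).
import cv2
import numpy as np

def single_scale_retinex(image: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    """Estimate reflectance as log(I) - log(Gaussian(I)), rescaled to 8-bit."""
    img = image.astype(np.float64) + 1.0           # offset to avoid log(0)
    blur = cv2.GaussianBlur(img, (0, 0), sigma)    # smooth illumination estimate
    retinex = np.log(img) - np.log(blur)           # reflectance in the log domain
    out = np.zeros_like(retinex)
    for c in range(retinex.shape[2]):              # stretch each channel to 0..255
        ch = retinex[..., c]
        out[..., c] = 255.0 * (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)
    return out.astype(np.uint8)

if __name__ == "__main__":
    frame = cv2.imread("tunnel_frame.png")         # hypothetical low-light MAV camera frame
    enhanced = single_scale_retinex(frame)
    cv2.imwrite("tunnel_frame_ssr.png", enhanced)
```

In practice, multi-scale variants combine several sigma values and add color restoration, which is the direction several of the compared methods take.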

Keywords

Low light imaging · Image enhancement · Subterranean MAV applications


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Christoforos Kanellakis (1)
  • Petros Karvelis (1)
  • George Nikolakopoulos (1)

  1. Robotics Team, Department of Computer, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
