
Underwater Light Field Depth Map Restoration Using Deep Convolutional Neural Fields

Chapter in Artificial Intelligence and Robotics

Part of the book series: Studies in Computational Intelligence (SCI, volume 752)

Abstract

Underwater optical images typically suffer from low illumination, strong scattering due to high turbidity, and wavelength-dependent absorption. A great deal of work has been devoted to improving the quality of underwater images; most of it relies on high-intensity LED lighting to obtain high-contrast images. In highly turbid water, however, high-intensity LEDs cause strong scattering and absorption. In this paper, we propose, to our knowledge for the first time, a light field imaging approach to underwater depth map estimation in low-intensity lighting environments. We tackle the de-scattering of light field images by using deep convolutional neural fields for depth estimation. Experimental results on challenging real-world underwater imagery demonstrate the effectiveness of the proposed method.
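To make the light-field depth cue concrete: a 4D light field records the scene from a grid of slightly offset viewpoints, and a point at a given depth appears displaced between views in proportion to its disparity. A minimal sketch of the classical correspondence cue follows; it shears the field to each candidate disparity and picks, per pixel, the disparity at which the angular views agree best. This is an illustrative NumPy sketch under simplifying assumptions (integer shifts, circular boundaries, a synthetic fronto-parallel scene), not the chapter's deep convolutional neural fields model; all function names are my own.

```python
import numpy as np

def shear(lf, d):
    """Shift each angular view of a 4D light field (U, V, H, W) by d pixels
    per unit of angular offset, i.e. re-focus the field at disparity d."""
    U, V, H, W = lf.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2
    out = np.empty_like(lf)
    for u in range(U):
        for v in range(V):
            dy = int(round(d * (u - uc)))
            dx = int(round(d * (v - vc)))
            out[u, v] = np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out

def depth_from_correspondence(lf, disparities):
    """Per pixel, choose the candidate disparity that minimizes the variance
    across angular views after shearing (the photo-consistency cue)."""
    costs = np.stack([shear(lf, d).var(axis=(0, 1)) for d in disparities])
    return np.asarray(disparities)[costs.argmin(axis=0)]

# Synthetic check: a textured fronto-parallel plane at disparity 1,
# rendered into a 3x3 grid of views via circular shifts of a base image.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
lf = np.stack([[np.roll(base, (-(u - 1), -(v - 1)), axis=(0, 1))
                for v in range(3)] for u in range(3)])
depth = depth_from_correspondence(lf, [0, 1, 2])
```

In scattering water this photo-consistency cue degrades, because backscatter adds a view-dependent veiling term; that is the failure mode the chapter's learned approach is meant to address.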



Acknowledgements

This work was supported by the Leading Initiative for Excellent Young Researchers (LEADER) of the Ministry of Education, Culture, Sports, Science and Technology, Japan (16809746); Grants-in-Aid for Scientific Research of JSPS (17K14694); the Research Fund of the Chinese Academy of Sciences (No. MGE2015KG02); the Research Fund of the State Key Laboratory of Marine Geology at Tongji University (MGK1608); the Research Fund of the State Key Laboratory of Ocean Engineering at Shanghai Jiao Tong University (1315; 1510); the Research Fund of The Telecommunications Advancement Foundation; and the Fundamental Research Developing Association for Shipbuilding and Offshore. We also thank Dr. Donald Dansereau of Stanford University for his assistance with the imaging equipment setup.

Author information

Corresponding author: Huimin Lu.


Copyright information

© 2018 Springer International Publishing AG

About this chapter


Cite this chapter

Lu, H., Li, Y., Kim, H., Serikawa, S. (2018). Underwater Light Field Depth Map Restoration Using Deep Convolutional Neural Fields. In: Lu, H., Xu, X. (eds) Artificial Intelligence and Robotics. Studies in Computational Intelligence, vol 752. Springer, Cham. https://doi.org/10.1007/978-3-319-69877-9_33


  • DOI: https://doi.org/10.1007/978-3-319-69877-9_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-69876-2

  • Online ISBN: 978-3-319-69877-9

  • eBook Packages: Engineering (R0)
