
Fast Omnidirectional Depth Densification

  • Hyeonjoong Jang
  • Daniel S. Jeon
  • Hyunho Ha
  • Min H. Kim (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11844)

Abstract

Omnidirectional cameras are commonly equipped with fisheye lenses to capture 360-degree visual information, and severe spherical projective distortion occurs when a 360-degree image is stored as a two-dimensional image array. As a consequence, traditional depth estimation methods are not directly applicable to omnidirectional cameras. Dense depth estimation for omnidirectional imaging has been achieved by applying several offline processes, such as patch matching, optical flow, and convolutional propagation filtering, resulting in additional heavy computation. No dense depth estimation method suitable for real-time applications has been available. In response, we propose an efficient depth densification method designed for omnidirectional imaging to achieve 360-degree dense depth video with an omnidirectional camera. First, we compute sparse depth estimates using a conventional simultaneous localization and mapping (SLAM) method and then use these estimates as input to our densification step. For this step, we propose a novel spherical pull-push method, devising a joint spherical pyramid for color and depth based on multi-level icosahedron subdivision surfaces. This allows us to propagate the sparse depth efficiently and continuously over the full 360-degree field in an edge-aware manner. The results demonstrate that our real-time densification method is comparable to state-of-the-art offline methods in terms of per-pixel depth accuracy. Combining our depth densification with a conventional SLAM method allows us to capture real-time 360-degree RGB-D video with a single omnidirectional camera.
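To make the core idea concrete, the sketch below is a minimal, planar NumPy illustration of confidence-weighted pull-push densification with a joint color/depth pyramid: the pull phase averages sparse depth samples into coarser levels, and the push phase fills holes from coarse levels while a color-similarity weight keeps propagation edge-aware. Note that the paper's actual pyramid lives on multi-level icosahedron subdivision surfaces to avoid spherical distortion; the regular 2D grid used here, the function name, and the `sigma_c` color bandwidth are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pull_push_densify(depth, conf, color, levels=5, sigma_c=0.1):
    """Hedged sketch: confidence-weighted pull-push on a planar grid.

    Assumes H and W are divisible by 2**levels. `depth` holds sparse
    estimates (0 where unknown), `conf` is 1 at samples and 0 elsewhere,
    and `color` is an (H, W, 3) guide image in [0, 1].
    """
    d_pyr = [depth.astype(np.float64)]
    c_pyr = [conf.astype(np.float64)]
    g_pyr = [color.astype(np.float64)]

    # Pull phase: confidence-weighted 2x2 averaging builds coarser levels,
    # so reliable sparse samples dominate each coarse cell. The color guide
    # is downsampled alongside depth (the joint color/depth pyramid).
    for _ in range(levels):
        d, c, g = d_pyr[-1], c_pyr[-1], g_pyr[-1]
        h, w = d.shape[0] // 2, d.shape[1] // 2
        d4 = d.reshape(h, 2, w, 2)
        c4 = c.reshape(h, 2, w, 2)
        g4 = g.reshape(h, 2, w, 2, 3)
        wsum = c4.sum(axis=(1, 3))
        d_pyr.append(np.where(wsum > 0,
                              (d4 * c4).sum(axis=(1, 3)) / np.maximum(wsum, 1e-12),
                              0.0))
        c_pyr.append(np.minimum(wsum / 4.0, 1.0))
        g_pyr.append(g4.mean(axis=(1, 3)))

    # Push phase: upsample each coarse estimate and blend it into the finer
    # level where confidence is low. The exponential color-similarity term
    # attenuates the coarse contribution across image edges, so depth does
    # not bleed between differently colored regions (edge-aware propagation).
    for lv in range(levels - 1, -1, -1):
        up_d = d_pyr[lv + 1].repeat(2, axis=0).repeat(2, axis=1)
        up_c = c_pyr[lv + 1].repeat(2, axis=0).repeat(2, axis=1)
        up_g = g_pyr[lv + 1].repeat(2, axis=0).repeat(2, axis=1)
        edge = np.exp(-np.sum((g_pyr[lv] - up_g) ** 2, axis=-1) / sigma_c ** 2)
        fill = (1.0 - c_pyr[lv]) * up_c * edge  # weight of the coarse guess
        total = c_pyr[lv] + fill
        d_pyr[lv] = np.where(total > 0,
                             (d_pyr[lv] * c_pyr[lv] + up_d * fill)
                             / np.maximum(total, 1e-12),
                             up_d)
        c_pyr[lv] = np.minimum(total, 1.0)

    return d_pyr[0]

# Usage on hypothetical data: densify 200 random samples on a 256x256 grid.
H, W = 256, 256
color = np.random.rand(H, W, 3)
depth = np.zeros((H, W))
conf = np.zeros((H, W))
ys, xs = np.random.randint(0, H, 200), np.random.randint(0, W, 200)
depth[ys, xs] = np.random.rand(200) * 5.0
conf[ys, xs] = 1.0
dense = pull_push_densify(depth, conf, color)
```

Because each pull and push level touches every sample only once, the whole pass is linear in the number of pixels, which is what makes this family of filters attractive for real-time densification.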

Keywords

Omnidirectional stereo · 3D imaging · Depth densification

Notes

Acknowledgements

Min H. Kim acknowledges Korea NRF grants (2019R1A2C3007229, 2013M3A6A-6073718) and additional support by Cross-Ministry Giga KOREA Project (GK17-P0200), Samsung Electronics (SRFC-IT1402-02), ETRI (19ZR1400), and an ICT R&D program of MSIT/IITP of Korea (2016-0-00018).

Supplementary material

491108_1_En_53_MOESM1_ESM.pdf (1.3 MB)
Supplementary material 1 (PDF, 1370 KB)

Supplementary material 2 (MP4, 57316 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Hyeonjoong Jang
  • Daniel S. Jeon
  • Hyunho Ha
  • Min H. Kim (corresponding author)

  KAIST School of Computing, Daejeon, Korea
