Depth Image Super Resolution for 3D Reconstruction of Oil Refinery Buildings

  • Shuaihao Li
  • Bin Zhang
  • Xinfeng Yang
  • Yanxiang He
  • Min Chen

A Time-of-Flight (ToF) camera can capture the depth of a dynamic scene surface in real time and has been applied to the 3D reconstruction of refinery buildings. However, due to sensor hardware limitations, the resolution of the acquired depth image is very low, so it cannot meet the dense-depth requirements of practical applications such as 3D reconstruction. It is therefore necessary to improve the resolution of the depth image in software with a well-designed algorithm. We propose a depth image super-resolution algorithm that fuses multiple progressive convolutional neural networks within a context-based network fusion framework, improving the performance and efficiency of the individual networks while keeping network training simple. Experiments on public data sets show that the proposed algorithm matches or exceeds the current state-of-the-art methods.
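As a rough illustration of the fusion idea described above (not the authors' actual networks), the sketch below fuses several candidate high-resolution depth maps with per-pixel softmax weights derived from confidence maps; in the paper these candidates would come from different progressive CNNs, whereas here `upsample_nearest` is a trivial stand-in, and all function names are hypothetical.

```python
import numpy as np

def upsample_nearest(depth, scale):
    """Nearest-neighbour upsampling of a low-resolution depth map.

    A trivial stand-in for one progressive super-resolution network:
    each input pixel is replicated into a scale x scale block.
    """
    return np.kron(depth, np.ones((scale, scale)))

def fuse_outputs(candidates, confidences):
    """Fuse candidate high-resolution depth maps per pixel.

    candidates:  array of shape (N, H, W), one map per network.
    confidences: array of shape (N, H, W), unnormalised scores
                 (in the paper, these would be learned from context).
    Returns a single (H, W) depth map: a softmax-weighted average.
    """
    # Numerically stable per-pixel softmax over the N candidates.
    w = np.exp(confidences - confidences.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * candidates).sum(axis=0)

if __name__ == "__main__":
    lr = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 depth map
    up = upsample_nearest(lr, 2)                    # 8x8 candidate
    cands = np.stack([up, up + 0.1])                # two candidate maps
    confs = np.zeros_like(cands)                    # equal confidence
    fused = fuse_outputs(cands, confs)              # per-pixel average
    print(fused.shape)
```

With equal confidences the fusion reduces to a plain average; the point of a learned, context-based weighting is that each network can dominate in the regions (e.g. depth discontinuities vs. smooth surfaces) where it performs best.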


depth image · 3D reconstruction · super resolution · progressive convolutional neural network · oil refinery



This study was funded by the Natural Science Foundation of China (Grant 61373039), the China National Mapping and Geographic Information Bureau Engineering Technology Research Center (ID: SID770170101), and the Research Center for Culture-Technology Integration Innovation, Key Research Base of Humanities and Social Science of Province, China (ID: WK201704).



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  • Shuaihao Li (1)
  • Bin Zhang (2, 3)
  • Xinfeng Yang (1)
  • Yanxiang He (1)
  • Min Chen (1)

  1. School of Computer Science, Wuhan University, Wuhan, China
  2. School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
  3. Department of Computer Science, City University of Hong Kong, Kowloon Tong, China
