
3D Research, 9:58

Hybrid Multi-level Regularizations with Sparse Representation for Single Depth Map Super-Resolution

  • Doaa A. Altantawy
  • Ahmed I. Saleh
  • Sherif S. Kishk
3DR Express
Part of the following topical collections:
  1. Real 3D Imaging & Display

Abstract

Limited spatial resolution and a variety of degradations are the main restrictions of depth maps captured by today’s active 3D sensing devices. These restrictions limit the direct use of the obtained depth maps in most 3D applications. In this paper, we present a single depth map upsampling approach, in contrast to the common practice of using a corresponding aligned color image to guide the upsampling process. The proposed approach employs a multi-level decomposition to convert the depth upsampling process into a classification-based problem via a multi-level classification-based learning algorithm; hence, the lost high-frequency details can be better preserved at different levels. The adopted multi-level decomposition algorithm combines \(l_{1}\) and \(l_{0}\) sparse regularization with total-variation regularization to achieve structure- and edge-preserving smoothing that is robust to noisy degradations. In addition, the proposed classification-based learning algorithm improves the accuracy of discrimination by learning discriminative dictionaries, which carry the distinctive features of each class, together with shared dictionaries, which represent the features common to all classes. The proposed algorithm has been validated in different experiments, under a variety of degradations, using datasets from different sensing devices. Results show superiority over the state of the art, especially when upsampling noisy low-resolution depth maps.
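As a rough illustration of the sparse-representation machinery the abstract describes, the sketch below encodes a low-resolution depth patch as a sparse code over a dictionary via \(l_{1}\)-regularized least squares (solved with ISTA, a standard iterative shrinkage-thresholding scheme), then reconstructs a high-resolution patch from a paired dictionary sharing the same code. The dictionaries, patch sizes, and parameters here are toy placeholders, not the paper’s learned multi-level, class-specific dictionaries.

```python
import numpy as np

def ista_l1(D, x, lam=0.01, n_iter=500):
    """Minimize 0.5*||D a - x||^2 + lam*||a||_1 by iterative
    shrinkage-thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)            # gradient of the data-fit term
        a = a - grad / L                    # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D_lr = rng.standard_normal((25, 64))        # toy LR dictionary (5x5 patches)
D_lr /= np.linalg.norm(D_lr, axis=0)        # unit-norm atoms
D_hr = rng.standard_normal((100, 64))       # paired HR dictionary (10x10 patches)

a_true = np.zeros(64)
a_true[[3, 17, 42]] = [1.0, -0.5, 0.8]      # synthetic sparse ground truth
x_lr = D_lr @ a_true                        # synthetic LR depth patch

a = ista_l1(D_lr, x_lr)                     # sparse code of the LR patch
patch_hr = D_hr @ a                         # HR patch from the shared sparse code
```

In the coupled-dictionary formulation this sketch follows, the LR and HR dictionaries are trained jointly so that one sparse code reconstructs both resolutions; the paper’s contribution layers multi-level decomposition and class-discriminative dictionaries on top of this basic pipeline.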

Keywords

Single depth super-resolution · Multi-level decomposition · \(l_{1}\) and \(l_{0}\) sparse regularization · Multi-level classification-based learning · Total-variation regularization

Notes

Compliance with Ethical Standards

Conflict of interest

The authors certify that they have NO affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers’ bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.


Copyright information

© 3D Research Center, Kwangwoon University and Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Electronics and Communications Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
  2. Computer Engineering and Systems Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
