GAL: Geometric Adversarial Loss for Single-View 3D-Object Reconstruction

  • Li Jiang
  • Shaoshuai Shi
  • Xiaojuan Qi
  • Jiaya Jia
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11212)

Abstract

In this paper, we present a framework for reconstructing a point-based 3D model of an object from a single-view image. Distance metrics such as Chamfer distance were used in previous work to measure the difference between two point sets and to serve as the loss function in point-based reconstruction. However, such point-to-point losses do not constrain the 3D model from a global perspective. We propose adding a geometric adversarial loss (GAL). It is composed of two terms: a geometric loss, which enforces that the reconstructed 3D model stays close to the ground truth when viewed from different viewpoints, and a conditional adversarial loss, which encourages the generated point cloud to be semantically meaningful. GAL helps predict the occluded parts of objects and preserves the geometric structure of the predicted 3D model. Both qualitative results and quantitative analysis demonstrate the generality and suitability of our method.
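The Chamfer distance mentioned in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of the point-set loss used in prior point-based reconstruction work, not the authors' exact formulation (papers differ, e.g., on squared vs. unsquared distances and on summing vs. averaging over points):

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Symmetric Chamfer distance between two point clouds.

    p1: array of shape (N, 3); p2: array of shape (M, 3).
    For each point in one set, take the squared distance to its
    nearest neighbor in the other set, then average both directions.
    """
    # Pairwise squared distances, shape (N, M).
    d = np.sum((p1[:, None, :] - p2[None, :, :]) ** 2, axis=-1)
    # Nearest-neighbor term in each direction.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Because each point is matched only to its single nearest neighbor, the loss is purely local, which is the weakness GAL is designed to complement with global, multi-view and adversarial constraints.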

Keywords

3D reconstruction · Adversarial loss · Geometric consistency · Point cloud · 3D neural network

Supplementary material

Supplementary material 1 (mp4 17254 KB)

Supplementary material 2 (pdf 27133 KB)


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Li Jiang (1)
  • Shaoshuai Shi (1)
  • Xiaojuan Qi (1)
  • Jiaya Jia (1, 2)
  1. The Chinese University of Hong Kong, Hong Kong, China
  2. Tencent YouTu Lab, Shenzhen, China
