Fully-Convolutional Point Networks for Large-Scale Point Clouds

  • Dario Rethage
  • Johanna Wald
  • Jürgen Sturm
  • Nassir Navab
  • Federico Tombari
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11208)

Abstract

This work proposes a general-purpose, fully-convolutional network architecture for efficiently processing large-scale 3D data. One striking characteristic of our approach is its ability to take unorganized 3D representations such as point clouds as input and transform them internally into ordered structures that can be processed via 3D convolutions. In contrast to conventional approaches that maintain either unorganized or organized representations from input to output, our approach has the advantage of operating on memory-efficient input data representations while at the same time exploiting the natural structure of convolutional operations to avoid redundantly computing and storing spatial information in the network. The network eliminates the need to pre- or post-process the raw sensor data. This, together with its fully-convolutional nature, makes it an end-to-end method able to process point clouds of large spaces or even entire rooms with up to 200k points at once. Another advantage is that our network can produce either an ordered output or map predictions directly onto the input cloud, thus making it suitable as a general-purpose point cloud descriptor applicable to many 3D tasks. We demonstrate our network’s ability to effectively learn both low-level features as well as complex compositional relationships by evaluating it on benchmark datasets for semantic voxel segmentation, semantic part segmentation and 3D scene captioning.
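
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the general idea: an unorganized point cloud is internally converted into an ordered (voxel) representation, processed fully-convolutionally with 3D convolutions, and the resulting predictions are mapped back onto the input points. The grid resolution, channel widths, occupancy-based abstraction, and all class/function names (e.g. ToyFullyConvolutionalPointNet) are illustrative assumptions and do not reproduce the paper’s actual architecture.

# Minimal sketch (not the paper's architecture): unorganized points -> ordered
# voxel grid -> fully-convolutional 3D processing -> predictions mapped back
# onto the input cloud. All sizes and the occupancy abstraction are assumptions.
import torch
import torch.nn as nn


class ToyFullyConvolutionalPointNet(nn.Module):
    def __init__(self, num_classes: int, grid_size: int = 32):
        super().__init__()
        self.grid_size = grid_size
        # Fully-convolutional 3D encoder-decoder over the ordered grid.
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1, stride=2), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv3d(16, num_classes, kernel_size=1),
        )

    def voxelize(self, points: torch.Tensor):
        """Scatter an (N, 3) point cloud into a binary occupancy grid."""
        g = self.grid_size
        # Normalize coordinates into [0, 1) and convert to integer voxel indices.
        mins, maxs = points.min(0).values, points.max(0).values
        idx = ((points - mins) / (maxs - mins + 1e-6) * g).long().clamp(0, g - 1)
        grid = torch.zeros(1, 1, g, g, g)
        grid[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
        return grid, idx

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # Unorganized input -> ordered grid -> 3D convolutions -> per-voxel scores.
        grid, idx = self.voxelize(points)
        voxel_logits = self.net(grid)  # (1, num_classes, g, g, g)
        # Map the ordered output back onto the unorganized input cloud by
        # looking up each point's voxel (nearest-voxel assignment).
        return voxel_logits[0, :, idx[:, 0], idx[:, 1], idx[:, 2]].T  # (N, num_classes)


if __name__ == "__main__":
    cloud = torch.rand(2048, 3)                       # toy point cloud
    model = ToyFullyConvolutionalPointNet(num_classes=5)
    per_point_scores = model(cloud)
    print(per_point_scores.shape)                     # torch.Size([2048, 5])

Running the script prints one class-score vector per input point, illustrating how an ordered intermediate representation can still yield predictions mapped directly onto the unorganized input cloud.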

Keywords

Point clouds · 3D deep learning · Scene understanding · Fully-convolutional · Semantic segmentation · 3D captioning

Supplementary material

Supplementary material 1 (PDF, 675 KB)

Supplementary material 2 (M4V, 33,777 KB)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Technical University of Munich, Munich, Germany
  2. Google, Munich, Germany
