A Simple Approach to Intrinsic Correspondence Learning on Unstructured 3D Meshes

  • Isaak Lim
  • Alexander Dielen
  • Marcel Campen
  • Leif Kobbelt
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11131)


The question of how to represent 3D geometry is of vital importance when it comes to leveraging the recent advances in machine learning for geometry processing tasks. For common unstructured surface meshes, state-of-the-art methods rely on patch-based or mapping-based techniques that introduce resampling operations in order to encode neighborhood information in a structured and regular manner. We investigate whether such resampling can be avoided, and propose a simple and direct encoding approach. Not only does it increase processing efficiency due to its simplicity; its direct nature also avoids any loss in data fidelity. To evaluate the proposed method, we perform a number of experiments in the challenging domain of intrinsic, non-rigid shape correspondence estimation. In comparison to current methods, we observe that our approach achieves highly competitive results.
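The abstract does not spell out the encoding itself, but the core idea it describes — using the mesh's own connectivity as the neighborhood structure, with no resampling onto patches or regular grids — can be illustrated with a minimal sketch. All names and the aggregation rule below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def vertex_neighbors(faces, num_vertices):
    """Build a vertex adjacency list directly from triangle faces.

    No resampling or parameterization is involved: the unstructured
    mesh connectivity itself defines each vertex's neighborhood.
    (Illustrative sketch only, not the paper's code.)
    """
    neighbors = [set() for _ in range(num_vertices)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    return [sorted(n) for n in neighbors]

def aggregate_features(features, neighbors):
    """One round of direct neighborhood aggregation: each vertex's
    new feature is the mean of its own and its neighbors' features,
    read straight off the original vertices (no loss in fidelity
    from resampling)."""
    out = np.empty_like(features)
    for v, nbrs in enumerate(neighbors):
        out[v] = features[[v] + nbrs].mean(axis=0)
    return out

# Toy example: a fan of two triangles over 4 vertices.
faces = np.array([[0, 1, 2], [0, 2, 3]])
feats = np.eye(4, dtype=np.float64)  # one-hot per-vertex features
nbrs = vertex_neighbors(faces, 4)
agg = aggregate_features(feats, nbrs)
```

Because the aggregation indexes the original per-vertex features directly, every input value participates unchanged; a patch- or mapping-based pipeline would first interpolate these features onto a regular sampling pattern.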


Keywords: Shape correspondence estimation · Learning on graphs



The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 340884. We would like to thank the authors of related work [3, 17] for making their implementations available, as well as the reviewers for their insightful comments.


References

  1. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous systems (2015)
  2. Bogo, F., Romero, J., Loper, M., Black, M.J.: FAUST: dataset and evaluation for 3D mesh registration. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Piscataway, NJ, USA, June 2014
  3. Boscaini, D., Masci, J., Rodolà, E., Bronstein, M.: Learning shape correspondence with anisotropic convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 3189–3197 (2016)
  4. Defferrard, M., Bresson, X., Vandergheynst, P.: Convolutional neural networks on graphs with fast localized spectral filtering. In: Advances in Neural Information Processing Systems, pp. 3844–3852 (2016)
  5. Eynard, D., Kovnatsky, A., Bronstein, M.M., Glashoff, K., Bronstein, A.M.: Multimodal manifold analysis by simultaneous diagonalization of Laplacians. IEEE Trans. Pattern Anal. Mach. Intell. 37(12), 2505–2517 (2015)
  6. Eynard, D., Rodolà, E., Glashoff, K., Bronstein, M.M.: Coupled functional maps. In: 2016 Fourth International Conference on 3D Vision (3DV), pp. 399–407. IEEE (2016)
  7. Ezuz, D., Solomon, J., Kim, V.G., Ben-Chen, M.: GWCNN: a metric alignment layer for deep shape analysis. In: Computer Graphics Forum, vol. 36, pp. 49–57. Wiley Online Library (2017)
  8. Gehre, A., Bronstein, M., Kobbelt, L., Solomon, J.: Interactive curve constrained functional maps. Comput. Graph. Forum 37(5), 1–12 (2018)
  9. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
  10. Huang, Q., Wang, F., Guibas, L.: Functional map networks for analyzing and exploring large shape collections. ACM Trans. Graph. (TOG) 33(4), 36 (2014)
  11. Kim, V.G., Lipman, Y., Funkhouser, T.: Blended intrinsic maps. ACM Trans. Graph. (TOG) 30, 79 (2011)
  12. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  13. Kostrikov, I., Jiang, Z., Panozzo, D., Zorin, D., Bruna, J.: Surface networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018
  14. Kovnatsky, A., Bronstein, M.M., Bronstein, A.M., Glashoff, K., Kimmel, R.: Coupled quasi-harmonic bases. In: Computer Graphics Forum, vol. 32, pp. 439–448. Wiley Online Library (2013)
  15. Litany, O., Remez, T., Rodolà, E., Bronstein, A.M., Bronstein, M.M.: Deep functional maps: structured prediction for dense shape correspondence. In: Proceedings of ICCV, vol. 2, p. 8 (2017)
  16. Maron, H., et al.: Convolutional neural networks on surfaces via seamless toric covers. ACM Trans. Graph. 36(4), 71 (2017)
  17. Masci, J., Boscaini, D., Bronstein, M., Vandergheynst, P.: Geodesic convolutional neural networks on Riemannian manifolds. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 37–45 (2015)
  18. Monti, F., Boscaini, D., Masci, J., Rodolà, E., Svoboda, J., Bronstein, M.M.: Geometric deep learning on graphs and manifolds using mixture model CNNs. In: Proceedings of CVPR, vol. 1, p. 3 (2017)
  19. Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning (ICML 2010), pp. 807–814 (2010)
  20. Niepert, M., Ahmed, M., Kutzkov, K.: Learning convolutional neural networks for graphs. In: International Conference on Machine Learning, pp. 2014–2023 (2016)
  21. Nogneng, D., Melzi, S., Rodolà, E., Castellani, U., Bronstein, M., Ovsjanikov, M.: Improved functional mappings via product preservation. In: Computer Graphics Forum, vol. 37, pp. 179–190. Wiley Online Library (2018)
  22. Nogneng, D., Ovsjanikov, M.: Informative descriptor preservation via commutativity for shape matching. In: Computer Graphics Forum, vol. 36, pp. 259–267. Wiley Online Library (2017)
  23. Ovsjanikov, M., Ben-Chen, M., Solomon, J., Butscher, A., Guibas, L.: Functional maps: a flexible representation of maps between shapes. ACM Trans. Graph. (TOG) 31(4), 30 (2012)
  24. Pokrass, J., Bronstein, A.M., Bronstein, M.M., Sprechmann, P., Sapiro, G.: Sparse modeling of intrinsic correspondences. In: Computer Graphics Forum, vol. 32, pp. 459–468. Wiley Online Library (2013)
  25. Rodolà, E., Cosmo, L., Bronstein, M.M., Torsello, A., Cremers, D.: Partial functional correspondence. In: Computer Graphics Forum, vol. 36, pp. 222–236. Wiley Online Library (2017)
  26. Salti, S., Tombari, F., Di Stefano, L.: SHOT: unique signatures of histograms for surface and texture description. Comput. Vis. Image Underst. 125, 251–264 (2014)
  27. Scarselli, F., Gori, M., Tsoi, A.C., Hagenbuchner, M., Monfardini, G.: The graph neural network model. IEEE Trans. Neural Netw. 20(1), 61–80 (2009)
  28. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
  29. Van Kaick, O., Zhang, H., Hamarneh, G., Cohen-Or, D.: A survey on shape correspondence. In: Computer Graphics Forum, vol. 30, pp. 1681–1707. Wiley Online Library (2011)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Isaak Lim (1)
  • Alexander Dielen (1)
  • Marcel Campen (2)
  • Leif Kobbelt (1)

  1. Visual Computing Institute, RWTH Aachen University, Aachen, Germany
  2. Osnabrück University, Osnabrück, Germany
