
A Non-negative Factorization Approach to Node Pooling in Graph Convolutional Neural Networks

  • Davide Bacciu
  • Luigi Di Sotto
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11946)

Abstract

The paper discusses a pooling mechanism that induces subsampling in graph-structured data and introduces it as a component of a graph convolutional neural network. The pooling mechanism builds on the Non-Negative Matrix Factorization (NMF) of a matrix representing node adjacency and node similarity, adaptively obtained from the vertex embeddings learned by the model. This mechanism is applied to obtain an incrementally coarser graph in which nodes are adaptively pooled into communities according to the outcomes of the non-negative factorization. The empirical analysis on graph classification benchmarks shows that this coarsening process yields significant improvements in the predictive performance of the model with respect to its non-pooled counterpart.
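The abstract describes the pooling step only at a high level: a non-negative matrix that blends adjacency with embedding-based node similarity is factorized, and the resulting factors assign nodes to the communities of a coarser graph. The sketch below illustrates one way such a step could be realized; the convex mixing of adjacency and similarity, the row normalization of the factor matrix, and the DiffPool-style coarsening rule are assumptions made for illustration, not the paper's exact procedure.

```python
# Minimal sketch of NMF-based node pooling. The construction of the pooled
# matrix (adjacency mixed with embedding similarity) and the coarsening rule
# are illustrative assumptions, not the authors' published formulation.
import numpy as np
from sklearn.decomposition import NMF


def nmf_pool(A, H, k, alpha=0.5):
    """Pool an n-node graph down to k communities.

    A: (n, n) non-negative adjacency matrix.
    H: (n, d) node embeddings produced by the graph convolutional layers.
    k: number of pooled nodes (communities).
    alpha: assumed mixing weight between adjacency and similarity.
    """
    # Node-similarity matrix from the learned embeddings, clipped to stay non-negative.
    S_sim = np.clip(H @ H.T, 0.0, None)
    # Combine structure (adjacency) and content (similarity) into one non-negative matrix.
    M = alpha * A + (1.0 - alpha) * S_sim

    # Non-negative factorization M ~= W @ V; W acts as a soft node-to-community assignment.
    W = NMF(n_components=k, init="nndsvda", max_iter=500).fit_transform(M)
    # Row-normalize so each node distributes unit mass over the k communities.
    S = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

    # DiffPool-style coarsening: pooled node features and pooled adjacency.
    H_coarse = S.T @ H          # (k, d)
    A_coarse = S.T @ A @ S      # (k, k)
    return H_coarse, A_coarse, S


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((12, 12)) < 0.3).astype(float)
    A = np.maximum(A, A.T)                  # symmetric, non-negative adjacency
    H = rng.random((12, 16))                # stand-in for learned embeddings
    H_c, A_c, S = nmf_pool(A, H, k=4)
    print(H_c.shape, A_c.shape)             # (4, 16) (4, 4)
```

In this reading, the soft assignment matrix plays the same role as the cluster assignment in hierarchical pooling schemes, so the pooled graph can be fed to further convolutional layers to build an incrementally coarser representation.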

Keywords

Graph Convolutional Neural Networks · Differentiable graph pooling · Non-Negative Matrix Factorization

Acknowledgments

This work has been supported by the Italian Ministry of Education, University, and Research (MIUR) under project SIR 2014 LIST-IT (grant n. RBSI14STDE).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Dipartimento di Informatica, Università di Pisa, Pisa, Italy
