Towards a Stable Graph Representation Learning Using Connection Subgraphs

  • Saba A. Al-Sayouri
  • Sarah S. Lam
Part of the Women in Engineering and Science book series (WES)


This chapter studies the problem of learning large-scale graph representations (a.k.a. embeddings) that encode the relationships among distinct nodes. The learned representations generalize across various tasks, such as node classification, link prediction, and recommendation. Learning node representations aims to map proximate nodes close to one another in a low-dimensional vector space. Embedding algorithms therefore strive to preserve local and global network structure by identifying node neighborhoods. However, many existing algorithms generate embeddings that fail to preserve the network structure and are unstable; that is, the embeddings produced by multiple runs on the same graph differ. Such algorithms may still be suitable for single-graph tasks, like node classification, but they are ill-suited to multi-graph problems.
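The instability described above can be made concrete: because embeddings are only defined up to rotation of the vector space, two runs should be compared by the *pairwise-similarity structure* they induce, not by raw coordinates. The sketch below (an illustrative check, not part of GRCS; the function names are hypothetical) scores two runs as stable when their node-to-node cosine-similarity patterns agree, even under an arbitrary rotation.

```python
import numpy as np

def pairwise_cosine(X):
    """Cosine similarity between every pair of rows of an embedding matrix."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)
    return Xn @ Xn.T

def stability_score(emb_a, emb_b):
    """Correlate the off-diagonal pairwise-similarity structure of two runs.

    A score near 1.0 means the runs induce the same node geometry
    (stable); a score near 0.0 means they disagree (unstable).
    """
    n = emb_a.shape[0]
    iu = np.triu_indices(n, k=1)          # off-diagonal pairs only
    sa = pairwise_cosine(emb_a)[iu]
    sb = pairwise_cosine(emb_b)[iu]
    return float(np.corrcoef(sa, sb)[0, 1])

rng = np.random.default_rng(0)
base = rng.normal(size=(50, 16))                    # stand-in "true" embedding
rot = np.linalg.qr(rng.normal(size=(16, 16)))[0]    # random orthogonal rotation

run1 = base + 0.01 * rng.normal(size=base.shape)        # stable run
run2 = base @ rot + 0.01 * rng.normal(size=base.shape)  # same geometry, rotated
run3 = rng.normal(size=base.shape)                      # unstable, unrelated run

print(stability_score(run1, run2))  # high despite the rotation
print(stability_score(run1, run3))  # near zero
```

Comparing similarity structures rather than coordinates is what makes the check meaningful for multi-graph settings, where embeddings from separate runs must be placed in a common space.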

In this chapter, we propose GRCS, a novel algorithmic framework for stable graph representation learning using connection subgraphs, which draws an analogy between graphs and electrical circuits. Connection subgraphs have proven beneficial in a variety of real-world networks, such as social, biological, citation, co-authorship, and terrorism networks, because they capture the proximity between any two non-adjacent nodes, which are abundant in real-world networks, by maximizing the amount of flow between them. Although a connection subgraph captures proximity between two non-adjacent nodes, its formation accounts for connections with immediate neighbors as well. In addition, using connection subgraphs, we mitigate the effect of high-degree nodes, and we exploit the weak ties and meta-data that baseline embedding algorithms neglect.
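The electrical-circuit analogy can be sketched concretely: treat each edge weight as a conductance, fix the source node at +1 volt and the sink at 0 volts, solve the resulting graph-Laplacian system for the interior voltages, and rank edges by the current they carry. The snippet below is a minimal illustration of that classic construction (in the spirit of Faloutsos et al.'s connection subgraphs), not the chapter's GRCS implementation; `voltages` and `top_current_edges` are hypothetical helper names.

```python
import numpy as np

def voltages(adj, s, t):
    """Resistor-network voltages with v[s] = 1 and v[t] = 0.

    Each entry adj[u, w] is treated as the conductance of edge (u, w);
    the interior voltages solve the reduced Laplacian system.
    """
    n = adj.shape[0]
    L = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian
    free = [i for i in range(n) if i not in (s, t)]
    v = np.zeros(n)
    v[s] = 1.0
    b = adj[free, s] * v[s]                     # boundary term from the source
    v[free] = np.linalg.solve(L[np.ix_(free, free)], b)
    return v

def top_current_edges(adj, s, t, k=3):
    """Rank edges by the current they carry, I_uw = c_uw * |v_u - v_w|."""
    v = voltages(adj, s, t)
    n = adj.shape[0]
    edges = [((u, w), adj[u, w] * abs(v[u] - v[w]))
             for u in range(n) for w in range(u + 1, n) if adj[u, w] > 0]
    return sorted(edges, key=lambda e: -e[1])[:k]

# Toy graph: two parallel unit-weight paths 0-1-2 and 0-3-2 between s=0, t=2.
adj = np.zeros((4, 4))
for u, w in [(0, 1), (1, 2), (0, 3), (3, 2)]:
    adj[u, w] = adj[w, u] = 1.0

print(voltages(adj, 0, 2))          # interior nodes sit at 0.5 V by symmetry
print(top_current_edges(adj, 0, 2))
```

A connection-subgraph extractor would then keep the highest-current edges until a small subgraph connecting the two endpoints emerges; high-degree nodes naturally attenuate the current they pass along, which is one intuition behind their reduced influence.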

We demonstrate the efficacy and robustness of GRCS over existing representation learning algorithms on a node classification task using data sets from various domains. GRCS is robust to noise; its performance is either as good as or better than that of the state-of-the-art algorithms.
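Node-classification evaluations of embeddings typically train a simple classifier on the vectors of labeled nodes and report accuracy on held-out nodes. As a lightweight, dependency-free stand-in for the usual logistic-regression protocol (the function name is hypothetical and this is not the chapter's evaluation code), a nearest-centroid classifier over embeddings looks like this:

```python
import numpy as np

def nearest_centroid_accuracy(emb, labels, train_idx, test_idx):
    """Train a nearest-centroid classifier on the embeddings of the
    training nodes and return accuracy on the test nodes."""
    classes = np.unique(labels[train_idx])
    centroids = np.stack([emb[train_idx][labels[train_idx] == c].mean(axis=0)
                          for c in classes])
    # Distance from each test embedding to each class centroid.
    d = np.linalg.norm(emb[test_idx][:, None, :] - centroids[None], axis=2)
    pred = classes[d.argmin(axis=1)]
    return float((pred == labels[test_idx]).mean())

# Synthetic embeddings: two well-separated clusters standing in for two classes.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0.0, 0.1, (20, 8)),
                 rng.normal(3.0, 0.1, (20, 8))])
labels = np.array([0] * 20 + [1] * 20)
idx = np.arange(40)
acc = nearest_centroid_accuracy(emb, labels, idx[::2], idx[1::2])
print(acc)  # clusters are well separated, so accuracy is high
```

An embedding that preserves network structure places same-class nodes near one another, which is exactly what this protocol rewards.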



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Binghamton University, Binghamton, USA
