Towards a Stable Graph Representation Learning Using Connection Subgraphs
This chapter studies the problem of learning large-scale graph representations (a.k.a. embeddings) that encode the relationships among distinct nodes. The learned representations generalize across various tasks, such as node classification, link prediction, and recommendation. Node representation learning aims to map proximate nodes close to one another in a low-dimensional vector space; embedding algorithms therefore seek to preserve local and global network structure by identifying node neighborhoods. However, many existing algorithms generate embeddings that fail to preserve the network structure and are unstable: the embeddings produced by multiple runs on the same graph differ. Consequently, while such algorithms may be adequate for single-graph tasks, like node classification, they cannot fit multi-graph problems.
In this chapter, we propose graph representation learning using connection subgraphs (GRCS), a novel algorithmic framework for learning stable graph representations from connection subgraphs, which draw on an analogy with electrical circuits. Connection subgraphs have proved beneficial in many real-world networks, such as social, biological, citation, co-authorship, and terrorism networks, because they capture the proximity between pairs of non-adjacent nodes, which are abundant in real-world networks, by maximizing the amount of flow between them. Although a connection subgraph captures the proximity between two non-adjacent nodes, its formation also accounts for connections with immediate neighbors. In addition, using connection subgraphs, we address the issue of high-degree nodes and exploit the weak ties and meta-data that baseline embedding algorithms have neglected.
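To make the electrical-circuit analogy concrete, the sketch below computes a simple circuit-based proximity between two nodes: edges are treated as unit resistors and proximity is measured as effective resistance via the pseudoinverse of the graph Laplacian. This is an illustrative proxy for the flow-based intuition behind connection subgraphs, not the GRCS algorithm itself; the helper name `effective_resistance` is ours.

```python
# Illustrative sketch of the electrical-circuit view of node proximity
# (not the GRCS algorithm): edges are unit resistors, and two nodes are
# considered proximate when the effective resistance between them is low.
import numpy as np

def effective_resistance(adj, s, t):
    """Effective resistance between nodes s and t; lower means more proximate."""
    L = np.diag(adj.sum(axis=1)) - adj  # graph Laplacian L = D - A
    L_pinv = np.linalg.pinv(L)          # pseudoinverse (L itself is singular)
    e = np.zeros(len(adj))
    e[s], e[t] = 1.0, -1.0              # inject unit current at s, extract at t
    return float(e @ L_pinv @ e)

# Triangle graph: the direct edge (resistance 1) is in parallel with the
# two-hop path (resistance 2), giving (1 * 2) / (1 + 2) = 2/3.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
print(round(effective_resistance(A, 0, 1), 3))  # -> 0.667
```

Note how parallel paths between two nodes lower their effective resistance, which is why this measure rewards multiple independent connections rather than a single shortest path.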
We demonstrate the efficacy and robustness of GRCS against existing representation learning algorithms on a node classification task using data sets from various domains. GRCS is robust to noise, and its performance matches or exceeds that of the state-of-the-art algorithms.