Abstract
Recent years have witnessed significant growth in Graph Convolutional Networks (GCNs). As GCNs are widely applied to a number of tasks, their safety issues have drawn the attention of many researchers. Recent studies have demonstrated that GCNs are vulnerable to adversarial attacks: they are easily fooled by deliberate perturbations, and a number of attacking methods have been proposed. However, state-of-the-art methods, which incorporate meta-learning techniques, suffer from high computational costs, while heuristic methods, although efficient, lack satisfactory attacking performance. A promising way to resolve this trade-off is to identify the patterns exploited by gradient-based attacks and use them to improve heuristic algorithms.
In this paper, we leverage several patterns discovered in untargeted attacks to propose a novel heuristic strategy for attacking GCNs by creating vicious edges. We introduce the Fast Heuristic Attack (FHA) algorithm, which deliberately links training nodes to nodes of different classes. Instead of linking nodes fully at random, the algorithm repeatedly picks a batch of training nodes of the same class and links them to nodes of another class. Experimental studies show that our proposed method achieves competitive attacking performance against various GCN models while significantly outperforming Mettack, the state-of-the-art untargeted structure attack, in terms of runtime.
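The batched cross-class linking strategy described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `fha_attack`, the batch size, and the uniform random choice of source and target classes are assumptions made for the sketch, and the real FHA algorithm may select batches and target classes differently.

```python
import numpy as np

def fha_attack(adj, labels, train_idx, budget, batch_size=16, seed=0):
    """Hypothetical sketch of the FHA-style heuristic: repeatedly pick a
    batch of training nodes that share a class and link them to nodes of a
    different class, until the edge-insertion budget is exhausted."""
    rng = np.random.default_rng(seed)
    adj = adj.copy()
    added = 0
    classes = np.unique(labels[train_idx])
    while added < budget:
        # pick a source class and a batch of its training nodes
        src_class = rng.choice(classes)
        pool = train_idx[labels[train_idx] == src_class]
        batch = rng.choice(pool, size=min(batch_size, len(pool)), replace=False)
        # pick a different target class and its candidate nodes
        tgt_class = rng.choice(classes[classes != src_class])
        targets = np.where(labels == tgt_class)[0]
        for u in batch:
            if added >= budget:
                break
            v = rng.choice(targets)
            if u != v and adj[u, v] == 0:
                # insert a symmetric cross-class ("vicious") edge
                adj[u, v] = adj[v, u] = 1
                added += 1
    return adj
```

Because the method only samples nodes and flips adjacency entries, each inserted edge costs O(1), which is the intuition behind the runtime advantage over gradient-based attacks such as Mettack, where every candidate perturbation requires differentiating through surrogate training.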
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Zhan, H., Pei, X. (2021). FHA: Fast Heuristic Attack Against Graph Convolutional Networks. In: Soares, C., Torgo, L. (eds) Discovery Science. DS 2021. Lecture Notes in Computer Science, vol 12986. Springer, Cham. https://doi.org/10.1007/978-3-030-88942-5_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-88941-8
Online ISBN: 978-3-030-88942-5