
Adversarial Edit Attacks for Tree Data

  • Benjamin Paaßen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11871)

Abstract

Many machine learning models can be attacked with adversarial examples, i.e. inputs that lie close to correctly classified examples yet are classified incorrectly. However, most research on adversarial attacks to date is limited to vectorial data, in particular image data. In this contribution, we extend the field by introducing adversarial edit attacks for tree-structured data, with potential applications in medicine and automated program analysis. Our approach relies solely on the tree edit distance and a logarithmic number of black-box queries to the attacked classifier, without any need for gradient information.

We evaluate our approach on two programming and two biomedical data sets and show that many established tree classifiers, like tree-kernel-SVMs and recursive neural networks, can be attacked effectively.
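The following Python sketch is meant only to illustrate how a black-box edit attack of this kind could be organized: it binary-searches a tree edit script for the shortest prefix of edits that flips the classifier's prediction, so that only a logarithmic number of queries is needed. The helper names classify, edit_script, and apply_edits are hypothetical placeholders, not part of the paper's code; in practice an edit script could be obtained by backtracing the tree edit distance (Zhang-Shasha style), and the binary search assumes that once a prefix flips the label, longer prefixes flip it as well.

# Minimal sketch of a black-box adversarial edit attack on tree data.
# Assumptions (hypothetical helpers, not from the paper's implementation):
#   classify(tree)            -> predicted label of the attacked classifier
#   edit_script(x, reference) -> list of edits transforming x into reference
#   apply_edits(x, edits)     -> tree obtained by applying the edits to x

def adversarial_edit_attack(classify, x, reference, edit_script, apply_edits):
    """Return an adversarial tree with few applied edits, or None."""
    script = edit_script(x, reference)   # edits turning x into reference
    y_true = classify(x)                 # original (correct) prediction
    if classify(reference) == y_true:
        return None                      # reference tree does not flip the label

    # Binary search for the smallest prefix length whose result flips the
    # prediction; each iteration costs a single classifier query.
    lo, hi = 1, len(script)
    while lo < hi:
        mid = (lo + hi) // 2
        candidate = apply_edits(x, script[:mid])
        if classify(candidate) != y_true:
            hi = mid                     # a prefix of length mid already flips
        else:
            lo = mid + 1                 # need more edits
    return apply_edits(x, script[:lo])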

Keywords

Adversarial attacks · Tree edit distance · Structured data · Tree kernels · Recursive neural networks · Tree echo state networks


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. CITEC, Bielefeld University, Bielefeld, Germany
