
A Randomized Gradient-Free Attack on ReLU Networks

  • Francesco Croce
  • Matthias Hein
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11269)

Abstract

It has recently been shown that neural networks, but also other classifiers, are vulnerable to so-called adversarial attacks: in object recognition, for example, an almost imperceptible change of the image changes the decision of the classifier. Relatively fast heuristics have been proposed to produce such adversarial inputs, but the problem of finding the optimal adversarial input, that is the one with the minimal change of the input, is NP-hard. While methods based on mixed-integer optimization have been developed that find the optimal adversarial input, they do not scale to large networks. Currently, the attack scheme proposed by Carlini and Wagner is considered to produce the best adversarial inputs. In this paper we propose a new attack scheme for the class of ReLU networks based on a direct optimization over the resulting linear regions. In our experimental validation we improve over the Carlini-Wagner attack in 17 out of 18 experiments, with a relative improvement of up to 9%. As our approach is based on the geometrical structure of ReLU networks, it is less susceptible to defences targeting their functional properties.
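
To make the idea of optimizing over linear regions concrete, the sketch below shows the per-region subproblem in Python (NumPy/SciPy). On any single linear region a ReLU network acts as an affine map z -> W z + b on a polytope {z : A z <= d}, so the smallest l_inf perturbation that flips the decision while staying inside that region can be written as a linear program. The function name, the toy numbers, and the assumption that (W, b, A, d) have already been extracted from the activation pattern at x are illustrative; this is a minimal sketch of the subproblem only, not the authors' implementation, which additionally randomizes over which regions are explored.

import numpy as np
from scipy.optimize import linprog

def region_attack_linf(x, W, b, A, d, target):
    """Minimal l_inf perturbation of x that (i) stays in the linear region
    {z : A z <= d} and (ii) makes the locally affine classifier z -> W z + b
    prefer class `target` over the currently predicted class.
    Returns the perturbed point, or None if the LP is infeasible."""
    n = x.shape[0]
    c = int(np.argmax(W @ x + b))          # currently predicted class
    # LP variables: [z (n dims), t (1 dim)]; objective: minimize t = ||z - x||_inf
    obj = np.concatenate([np.zeros(n), [1.0]])
    I = np.eye(n)
    ones = np.ones((n, 1))
    rows = [
        np.hstack([ I, -ones]),                          #  z - x <= t
        np.hstack([-I, -ones]),                          #  x - z <= t
        np.hstack([ A, np.zeros((A.shape[0], 1))]),      #  stay inside the region
        np.hstack([(W[c] - W[target]).reshape(1, -1),    #  score(target) >= score(c)
                   np.zeros((1, 1))]),
    ]
    rhs = np.concatenate([x, -x, d, [b[target] - b[c]]])
    res = linprog(obj, A_ub=np.vstack(rows), b_ub=rhs,
                  bounds=[(None, None)] * n + [(0, None)], method="highs")
    return res.x[:n] if res.success else None

# Toy example: 2D input, 3 classes, a box-shaped stand-in for the linear region.
x = np.array([0.2, -0.1])
W = np.array([[1.0, 0.5], [-0.3, 1.2], [0.8, -0.7]])
b = np.array([0.1, -0.2, 0.0])
A = np.vstack([np.eye(2), -np.eye(2)])   # |z_i| <= 1
d = np.ones(4)
z = region_attack_linf(x, W, b, A, d, target=1)
if z is not None:
    print("perturbation size:", np.max(np.abs(z - x)))

A full attack would solve this subproblem for many regions (e.g. chosen at random around x) and keep the smallest successful perturbation; the l_1 case is also an LP, while l_2 leads to a quadratic program.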

Keywords

Adversarial manipulation · Robustness of classifiers

References

  1. Arora, R., Basu, A., Mianjy, P., Mukherjee, A.: Understanding deep neural networks with rectified linear units. In: ICLR (2018)
  2. Athalye, A., Carlini, N., Wagner, D.A.: Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. arXiv:1802.00420 (2018)
  3. Carlini, N., Katz, G., Barrett, C., Dill, D.L.: Provably minimally-distorted adversarial examples. arXiv:1709.10207v2 (2017)
  4. Carlini, N., Wagner, D.: Adversarial examples are not easily detected: bypassing ten detection methods. In: ACM Workshop on Artificial Intelligence and Security (2017)
  5. Carlini, N., Wagner, D.A.: Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy, pp. 39–57 (2017)
  6. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: ICLR (2015)
  7. Gurobi Optimization, Inc.: Gurobi optimizer reference manual (2016). http://www.gurobi.com
  8. Hein, M., Andriushchenko, M.: Formal guarantees on the robustness of a classifier against adversarial manipulation. In: NIPS (2017)
  9. Huang, R., Xu, B., Schuurmans, D., Szepesvari, C.: Learning with a strong adversary. In: ICLR (2016)
  10. Katz, G., Barrett, C., Dill, D., Julian, K., Kochenderfer, M.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: CAV (2017)
  11. Krizhevsky, A., Nair, V., Hinton, G.: CIFAR-10 (Canadian Institute for Advanced Research). http://www.cs.toronto.edu/~kriz/cifar.html
  12. Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. In: ICLR Workshop (2017)
  13. Liu, Y., Chen, X., Liu, C., Song, D.: Delving into transferable adversarial examples and black-box attacks. In: ICLR (2017)
  14. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: ICLR (2018)
  15. Montufar, G., Pascanu, R., Cho, K., Bengio, Y.: On the number of linear regions of deep neural networks. In: NIPS (2014)
  16. Moosavi-Dezfooli, S., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: CVPR (2017)
  17. Papernot, N., et al.: CleverHans v2.0.0: an adversarial machine learning library. arXiv:1610.00768 (2017)
  18. Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep networks. In: IEEE Symposium on Security and Privacy (2016)
  19. Raghunathan, A., Steinhardt, J., Liang, P.: Certified defenses against adversarial examples. In: ICLR (2018)
  20. Rauber, J., Brendel, W., Bethge, M.: Foolbox: a Python toolbox to benchmark the robustness of machine learning models. arXiv:1707.04131 (2017)
  21. Moosavi-Dezfooli, S.-M., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: CVPR, pp. 2574–2582 (2016)
  22. Stallkamp, J., Schlipsing, M., Salmen, J., Igel, C.: Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition. Neural Netw. 32, 323–332 (2012)
  23. Szegedy, C., et al.: Intriguing properties of neural networks. In: ICLR, pp. 2503–2511 (2014)
  24. Tjeng, V., Tedrake, R.: Verifying neural networks with mixed integer programming. arXiv:1711.07356v1 (2017)
  25. Wong, E., Kolter, J.Z.: Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv:1711.00851v2 (2018)
  26. Yuan, X., He, P., Zhu, Q., Bhat, R.R., Li, X.: Adversarial examples: attacks and defenses for deep learning. arXiv:1712.07107 (2017)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Mathematics and Computer Science, Saarland University, Saarbrücken, Germany
  2. Department of Computer Science, University of Tübingen, Tübingen, Germany