Machine Learning, Volume 107, Issue 3, pp 481–508

Analysis of classifiers’ robustness to adversarial perturbations

Abstract

The goal of this paper is to analyze the intriguing instability of classifiers to adversarial perturbations (Szegedy et al., in: International conference on learning representations (ICLR), 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on two practical classes of classifiers, namely the linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers compared to the difficulty of the classification task (captured mathematically by the distinguishability measure). We further show the existence of a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor that is proportional to \(\sqrt{d}\) (with d being the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high dimensional problems, which was empirically observed by Szegedy et al. in the context of neural networks. Finally, we show experimental results on controlled and real-world data that confirm the theoretical analysis and extend its spirit to more complex classification schemes.
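The \(\sqrt{d}\) gap stated above for linear classifiers can be illustrated numerically. The following sketch is not taken from the paper; it uses an arbitrary synthetic linear classifier, data point, and dimension to compare the norm of the minimal worst-case (adversarial) perturbation with the perturbation norm needed to change the decision along random directions.

```python
# Minimal illustrative sketch (not from the paper): for a linear classifier
# f(x) = <w, x>, the smallest perturbation that flips sign(f(x)) has norm
# |f(x)| / ||w|| and points along w. Along a random unit direction u, the
# sign flips only once the perturbation norm reaches |f(x)| / |<w, u>|,
# which is typically on the order of sqrt(d) times larger.
import numpy as np

rng = np.random.default_rng(0)
d = 2500                                # signal dimension (illustrative choice)
w = rng.normal(size=d)                  # weights of a synthetic linear classifier
x = rng.normal(size=d)                  # a synthetic data point
margin = abs(w @ x)                     # |f(x)|

# Norm of the minimal adversarial perturbation (worst-case direction, along w).
adv_norm = margin / np.linalg.norm(w)

# Norm needed to flip the decision along 1000 random unit directions.
u = rng.normal(size=(1000, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
rand_norms = margin / np.abs(u @ w)

print(f"adversarial perturbation norm: {adv_norm:.3f}")
print(f"median random-noise norm     : {np.median(rand_norms):.3f}")
print(f"ratio (sqrt(d) = {np.sqrt(d):.0f})       : {np.median(rand_norms) / adv_norm:.1f}")
```

With d = 2500 the printed ratio is on the order of \(\sqrt{d} \approx 50\), consistent with the proportionality stated in the abstract; the exact constant depends on the distribution of the random directions.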

Keywords

Adversarial examples · Classification robustness · Random noise · Instability · Deep networks

Acknowledgements

We thank the anonymous reviewers for their detailed comments. We thank Hamza Fawzi and Ian Goodfellow for discussions and comments on an early draft of the paper, and Guillaume Aubrun for pointing out a reference for Theorem 4. We also thank Seyed Mohsen Moosavi for his help in preparing experiments.

References

  1. Barreno, M., Nelson, B., Sears, R., Joseph, A., & Tygar, D. (2006). Can machine learning be secure? In ACM symposium on information, computer and communications security (pp. 16–25).
  2. Bendale, A., & Boult, T. E. (2016). Towards open set deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1563–1572).
  3. Bhatia, R. (2013). Matrix analysis (Vol. 169). Berlin: Springer.
  4. Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., et al. (2013). Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases (pp. 387–402). Berlin: Springer.
  5. Biggio, B., Nelson, B., & Laskov, P. (2012). Poisoning attacks against support vector machines. In International conference on machine learning (ICML).
  6. Bousquet, O., & Elisseeff, A. (2002). Stability and generalization. The Journal of Machine Learning Research, 2, 499–526.
  7. Caramanis, C., Mannor, S., & Xu, H. (2012). Robust optimization in machine learning. In S. Sra, S. Nowozin, & S. J. Wright (Eds.), Optimization for machine learning (chap. 14). Cambridge: MIT Press.
  8. Carlini, N., & Wagner, D. (2016). Towards evaluating the robustness of neural networks. arXiv preprint arXiv:1608.04644.
  9. Chalupka, K., Perona, P., & Eberhardt, F. (2014). Visual causal feature learning. arXiv preprint arXiv:1412.2309.
  10. Chang, C. C., & Lin, C. J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2, 27:1–27:27.
  11. Chang, Y. W., Hsieh, C. J., Chang, K. W., Ringgaard, M., & Lin, C. J. (2010). Training and testing low-degree polynomial data mappings via linear SVM. The Journal of Machine Learning Research, 11, 1471–1490.
  12. Dalvi, N., Domingos, P., Sanghai, S., & Verma, D. (2004). Adversarial classification. In ACM SIGKDD (pp. 99–108).
  13. Dekel, O., Shamir, O., & Xiao, L. (2010). Learning to classify with missing and corrupted features. Machine Learning, 81(2), 149–178.
  14. Fan, R. E., Chang, K. W., Hsieh, C. J., Wang, X. R., & Lin, C. J. (2008). LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9, 1871–1874.
  15. Fawzi, A., & Frossard, P. (2015). Manitest: Are classifiers really invariant? In British machine vision conference (BMVC) (pp. 106.1–106.13).
  16. Goldberg, Y., & Elhadad, M. (2008). splitSVM: Fast, space-efficient, non-heuristic, polynomial kernel computation for NLP applications. In 46th Annual meeting of the association for computational linguistics on human language technologies: Short papers (pp. 237–240).
  17. Goodfellow, I. (2015). Adversarial examples. Presentation at the Deep Learning Summer School, Montreal. http://www.iro.umontreal.ca/~memisevr/dlss2015/goodfellow_adv.pdf.
  18. Goodfellow, I., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. In International conference on learning representations.
  19. Gu, S., & Rigazio, L. (2014). Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068.
  20. Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto.
  21. Lanckriet, G., Ghaoui, L., Bhattacharyya, C., & Jordan, M. (2003). A robust minimax approach to classification. The Journal of Machine Learning Research, 3, 555–582.
  22. LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
  23. Lewis, A., & Pang, J. (1998). Error bounds for convex inequality systems. In J.-P. Crouzeix, J.-E. Martinez-Legaz, & M. Volle (Eds.), Generalized convexity, generalized monotonicity: Recent results (pp. 75–110). Berlin: Springer.
  24. Li, G., Mordukhovich, B. S., & Pham, T. S. (2015). New fractional error bounds for polynomial systems with applications to Hölderian stability in optimization and spectral theory of tensors. Mathematical Programming, 153(2), 333–362.
  25. Łojasiewicz, S. (1961). Sur le problème de la division (to complete).
  26. Lowe, D. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.
  27. Lugosi, G., & Pawlak, M. (1994). On the posterior-probability estimate of the error rate of nonparametric classification rules. IEEE Transactions on Information Theory, 40(2), 475–481.
  28. Luo, X., & Luo, Z. (1994). Extension of Hoffman’s error bound to polynomial systems. SIAM Journal on Optimization, 4(2), 383–392.
  29. Luo, Z. Q., & Pang, J. S. (1994). Error bounds for analytic systems and their applications. Mathematical Programming, 67(1–3), 1–28.
  30. Matoušek, J. (2002). Lectures on discrete geometry (Vol. 108). New York: Springer.
  31. Moosavi-Dezfooli, S. M., Fawzi, A., & Frossard, P. (2016). DeepFool: A simple and accurate method to fool deep neural networks. In IEEE conference on computer vision and pattern recognition (CVPR).
  32. Ng, K., & Zheng, X. (2003). Error bounds of constrained quadratic functions and piecewise affine inequality systems. Journal of Optimization Theory and Applications, 118(3), 601–618.
  33. Nguyen, A., Yosinski, J., & Clune, J. (2014). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. arXiv preprint arXiv:1412.1897.
  34. Pang, J. (1997). Error bounds in mathematical programming. Mathematical Programming, 79(1–3), 299–332.
  35. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  36. Šrndić, N., & Laskov, P. (2014). Practical evasion of a learning-based classifier: A case study. In IEEE symposium on security and privacy (pp. 197–211). IEEE.
  37. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., et al. (2014). Intriguing properties of neural networks. In International conference on learning representations (ICLR).
  38. Xu, H., Caramanis, C., & Mannor, S. (2009). Robustness and regularization of support vector machines. The Journal of Machine Learning Research, 10, 1485–1510.

Copyright information

© The Author(s) 2017

Authors and Affiliations

  1. Signal Processing Laboratory (LTS4), EPFL, Lausanne, Switzerland
  2. LIP, ENS de Lyon, Lyon, France