‘Security Theater’: On the Vulnerability of Classifiers to Exploratory Attacks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 10241)

Abstract

The increasing scale and sophistication of cyber-attacks have led to the adoption of machine learning based classification techniques at the core of cybersecurity systems. These techniques promise scale and accuracy that traditional rule/signature based methods cannot provide. However, classifiers operating in adversarial domains are vulnerable to evasion attacks by an adversary who is capable of learning the behavior of the system through intelligently crafted probes. Classification accuracy in such domains provides a false sense of security, as detection can easily be evaded by carefully perturbing the input samples. In this paper, a generic data driven framework is presented to analyze the vulnerability of classification systems to black box probing based attacks. The framework uses an exploration-exploitation based strategy to understand an adversary’s point of view of the attack-defense cycle. The adversary assumes a black box model of the defender’s classifier and can launch indiscriminate attacks on it, without knowledge of the defender’s model type, training data or domain of application. Experimental evaluation on 10 real world datasets demonstrates that even models with high perceived accuracy (>90%) from the defender’s point of view can be effectively circumvented, with a high evasion rate (>95% on average). The detailed attack algorithms, adversarial model and empirical evaluation serve as background for developing secure machine learning based systems.
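To make the probing-based attack concrete, the sketch below illustrates the general exploration-exploitation idea under simplifying assumptions: a random-forest defender trained on synthetic data, a label-only query interface, and a naive "step toward a benign anchor" exploitation rule. The model, dataset, step size and query budget are illustrative stand-ins, not the authors’ actual framework.

```python
# Minimal sketch of a black-box, probe-based evasion attack in the spirit of
# the exploration-exploitation framework described in the abstract.
# This is NOT the paper's exact algorithm; the defender model, dataset,
# step size and query budget are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Defender: trains a classifier the adversary can only query (label-only access).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
query = lambda x: black_box.predict(x.reshape(1, -1))[0]  # adversary's only interface

# Exploration: probe with random samples to find points labelled benign (class 0).
probes = rng.normal(size=(500, X.shape[1]))
benign_anchors = probes[np.array([query(p) for p in probes]) == 0]

# Exploitation: move a detected (malicious, class 1) sample toward a benign anchor,
# querying after each small step and stopping as soon as detection is evaded.
def evade(x_mal, anchors, step=0.05, budget=200):
    x = x_mal.copy()
    anchor = anchors[rng.integers(len(anchors))]
    for _ in range(budget):
        if query(x) == 0:            # classifier now says "benign": evasion succeeded
            return x, True
        x = x + step * (anchor - x)  # small perturbation toward the benign region
    return x, False

malicious = X_test[(y_test == 1) & (black_box.predict(X_test) == 1)]
results = [evade(x, benign_anchors)[1] for x in malicious[:50]]
print(f"evasion rate on 50 detected samples: {np.mean(results):.0%}")
```

Even this crude random-anchor strategy typically achieves a high evasion rate against an otherwise accurate classifier, which is the gap between perceived and effective security that the paper examines.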

Notes

  1. https://www.statista.com/chart/2540/data-breaches/.
  2. https://aws.amazon.com/machine-learning/.
  3. https://cloud.google.com/prediction/.
  4. https://bigml.com/.


Author information

Corresponding author

Correspondence to Tegjyot Singh Sethi.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Sethi, T.S., Kantardzic, M., Ryu, J.W. (2017). ‘Security Theater’: On the Vulnerability of Classifiers to Exploratory Attacks. In: Wang, G., Chau, M., Chen, H. (eds) Intelligence and Security Informatics. PAISI 2017. Lecture Notes in Computer Science, vol 10241. Springer, Cham. https://doi.org/10.1007/978-3-319-57463-9_4

  • DOI: https://doi.org/10.1007/978-3-319-57463-9_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-57462-2

  • Online ISBN: 978-3-319-57463-9

  • eBook Packages: Computer Science, Computer Science (R0)
