
Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts

Conference paper

Progress in Artificial Intelligence (EPIA 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11805)

Abstract

Adversarial machine learning is an area of study that examines both the generation and detection of adversarial examples, inputs specially crafted to deceive classifiers. It has been researched extensively in image recognition, where humanly imperceptible modifications to images cause a classifier to make incorrect predictions.

The main objective of this paper is to study the behavior of multiple state-of-the-art machine learning algorithms in an adversarial context. To perform this study, six different classification algorithms were used on two datasets, NSL-KDD and CICIDS2017, and four adversarial attack techniques were implemented with multiple perturbation magnitudes. Furthermore, the effectiveness of training the models with adversarial examples to improve recognition is also tested. The results show that adversarial attacks deteriorate the performance of all the classifiers by between 13% and 40%, with the Denoising Autoencoder being the most resilient technique.
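To illustrate the kind of attack studied here, one widely used technique for crafting adversarial examples is the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction of the sign of the loss gradient with respect to that input. The sketch below applies FGSM to a simple logistic-regression classifier; the model, weights, and epsilon value are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: x_adv = x + eps * sign(d loss / d x).

    For logistic regression with binary cross-entropy loss,
    the gradient of the loss w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: a point the model correctly assigns to class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
p_clean = sigmoid(np.dot(w, x) + b)       # confidence on clean input
p_adv = sigmoid(np.dot(w, x_adv) + b)     # confidence after perturbation
print(p_adv < p_clean)  # True: the attack lowers confidence in the true class
```

The same gradient-sign idea scales to deep networks, where the input gradient is obtained by backpropagation; larger epsilon values trade detectability for attack strength, which is why the paper evaluates multiple perturbation magnitudes.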



Acknowledgements

This work was supported by the ATENA European H2020 Project (H2020-DS-2015-1 Project 700581).

Author information

Correspondence to Nuno Martins.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Martins, N., Cruz, J.M., Cruz, T., Abreu, P.H. (2019). Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts. In: Moura Oliveira, P., Novais, P., Reis, L. (eds.) Progress in Artificial Intelligence. EPIA 2019. Lecture Notes in Computer Science, vol. 11805. Springer, Cham. https://doi.org/10.1007/978-3-030-30244-3_22


  • DOI: https://doi.org/10.1007/978-3-030-30244-3_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30243-6

  • Online ISBN: 978-3-030-30244-3

  • eBook Packages: Computer Science (R0)
