
Adversarial Attacks on Anomaly Detection

  • Living reference work entry
  • In: Encyclopedia of Machine Learning and Data Science

Abstract

As anomaly detection takes on an ever-increasing role in safety-critical systems that must detect deviations from normal behavior, adversaries have a growing incentive to bypass it. The rise of attacks on machine learning systems has led to myriad methods, applied at training time, test time, or both, that induce the misclassification of inputs during deployment. These methods have recently been extended to anomaly detectors, and this chapter summarizes them. In response, work on securing anomaly detectors against adversaries has also emerged. Defenses typically sanitize the training data, simulate an attack during training, or adopt data transformations that make the detectors less sensitive to perturbations.
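To make the test-time (evasion) setting concrete, the following is a minimal, hypothetical sketch, not taken from this chapter: a simple distance-based anomaly detector is thresholded on its training data, and an adversary perturbs an anomalous input just enough that its score falls below the threshold. The detector, names, and numbers are all illustrative assumptions.

```python
import numpy as np

# Illustrative detector (an assumption, not the chapter's method): flag
# points whose Euclidean distance from the training mean exceeds a
# threshold calibrated on "normal" training data.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 2))  # normal behavior
mu = train.mean(axis=0)

def anomaly_score(x):
    return float(np.linalg.norm(x - mu))

scores = [anomaly_score(x) for x in train]
threshold = float(np.quantile(scores, 0.99))  # flag the top 1% of scores

# Test-time evasion: move an anomalous point along the straight line
# toward the training mean until its score just drops below the threshold.
anomaly = np.array([6.0, 6.0])
direction = (mu - anomaly) / anomaly_score(anomaly)  # unit vector toward mu
excess = anomaly_score(anomaly) - threshold          # score to shed
evasive = anomaly + (excess + 1e-6) * direction      # minimal perturbation

print(anomaly_score(anomaly) > threshold)   # originally flagged: True
print(anomaly_score(evasive) <= threshold)  # now evades detection: True
```

Gradient-based attacks on neural detectors follow the same logic, replacing the straight-line direction with the gradient of the anomaly score with respect to the input; poisoning attacks instead shift `mu` and `threshold` themselves by corrupting the training data.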




Corresponding author

Correspondence to Paria Shirani.


Copyright information

© 2022 Springer Science+Business Media, LLC, part of Springer Nature

About this entry


Cite this entry

Bhagoji, A.N., Shirani, P. (2022). Adversarial Attacks on Anomaly Detection. In: Phung, D., Webb, G.I., Sammut, C. (eds) Encyclopedia of Machine Learning and Data Science. Springer, New York, NY. https://doi.org/10.1007/978-1-4899-7502-7_998-1



  • Print ISBN: 978-1-4899-7502-7

  • Online ISBN: 978-1-4899-7502-7

  • eBook Packages: Springer Reference Computer Sciences, Reference Module Computer Science and Engineering
