
Adversarial Learning on Malware

Living reference work entry in the Encyclopedia of Machine Learning and Data Science

Abstract

As cybersecurity systems grow increasingly dependent on machine learning models, it is important to understand how an individual, organization, or government may exploit or deceive these models. Building models that are robust to adversarial methods first requires understanding adversarial techniques. This area of research, known as adversarial learning, has grown enormously over the last 15 years and is critical to the cybersecurity domain: with the increasing use of machine learning in malware detection, an arms race between adversaries and network defenders has emerged. Adversarial learning on malware studies both how malware can deceive detection models and how detection models can be hardened against such deception. Adversarial attacks are commonly characterized along three dimensions: knowledge, space, and strategy. Knowledge describes how much an adversary knows about the target system. Space refers to where the adversary mounts the attack, for example in the model's feature space or in the problem space of real binaries. Strategy refers to when the adversary mounts the attack, for example at training time through poisoning or at test time through evasion. Adversarial learning on malware has succeeded against a wide range of malware types targeting many different systems.
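To make the evasion setting concrete, below is a minimal, hypothetical Python sketch of a black-box, test-time padding attack. Every name in it is illustrative: detector_score is a toy stand-in for a real detector, not any model from the literature, and the attack simply appends benign-looking overlay bytes, which preserves a program's behavior because appended overlay data is never executed.

import random

def detector_score(binary: bytes) -> float:
    # Toy stand-in for a deployed malware detector: returns the
    # fraction of high-valued bytes as a crude "maliciousness"
    # score in [0, 1]. A real target would be a trained model.
    if not binary:
        return 0.0
    return sum(1 for b in binary if b > 0x7F) / len(binary)

def evade_by_padding(malware: bytes, benign: bytes,
                     threshold: float = 0.5,
                     max_queries: int = 100,
                     chunk: int = 64) -> bytes:
    # Black-box (knowledge), problem-space (space), test-time
    # (strategy) attack: append random benign byte chunks to the
    # file until the detector's score drops below its threshold.
    candidate = malware
    for _ in range(max_queries):
        if detector_score(candidate) < threshold:
            return candidate  # evasion succeeded
        start = random.randrange(max(1, len(benign) - chunk))
        candidate += benign[start:start + chunk]
    return candidate  # query budget exhausted; best effort

if __name__ == "__main__":
    payload = bytes([0xE8, 0xFF, 0x90]) * 200  # toy "malicious" bytes
    pool = bytes(range(0x20, 0x60)) * 50       # toy benign byte pool
    adv = evade_by_padding(payload, pool)
    print(f"score before: {detector_score(payload):.2f}, "
          f"after: {detector_score(adv):.2f}")

Against a real detector, the same loop would replace the toy scorer with queries to the deployed model; the attacker's knowledge of that model (a black box here) and the available query budget determine how efficient the search can be.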

Author information

Correspondence to Steven H. H. Ding.

Copyright information

© 2022 Springer Science+Business Media, LLC, part of Springer Nature

About this entry

Cite this entry

Molloy, C., Mansour, Z., Ding, S.H.H. (2022). Adversarial Learning on Malware. In: Phung, D., Webb, G.I., Sammut, C. (eds) Encyclopedia of Machine Learning and Data Science. Springer, New York, NY. https://doi.org/10.1007/978-1-4899-7502-7_982-1

  • DOI: https://doi.org/10.1007/978-1-4899-7502-7_982-1

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4899-7502-7

  • Online ISBN: 978-1-4899-7502-7

  • eBook Packages: Springer Reference Computer Sciences; Reference Module Computer Science and Engineering
