Evading API Call Sequence Based Malware Classifiers

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 11999)

Abstract

In this paper, we present a mimicry attack that transforms a malware binary so that it evades detection by API call sequence based malware classifiers. While the original malware is detectable by such classifiers, the transformed malware, when run, produces a modified API call sequence without compromising the original payload and thereby effectively avoids detection. Our attack is effective against a large set of malware classifiers, including tree-based models such as Random Forest (RF), Decision Tree (DT) and XGBoost, as well as fully connected neural networks, CNNs, RNNs and their variants. Our implementation is easy to use (a malware transformation only requires running a couple of commands) and generic (it works for any malware without requiring malware-specific changes). We also show that adversarial retraining can make malware classifiers robust against such evasion attacks.
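The abstract's core idea can be illustrated with a toy sketch. This is not the authors' implementation: the detector, the API names, and the no-op call below are all hypothetical, chosen only to show how interleaving benign calls can break sequence-level features (here, bigrams) while preserving the relative order of the payload's real calls.

```python
# Toy illustration of an API-call-sequence mimicry attack: a bigram-based
# detector flags a known-malicious call pattern; interleaving a benign
# "no-op" API call breaks the bigrams the detector relies on while keeping
# the payload's real calls in their original order.

def bigrams(seq):
    """All adjacent pairs of API calls in the sequence."""
    return set(zip(seq, seq[1:]))

# Hypothetical signature: bigrams the toy detector treats as malicious.
MALICIOUS_BIGRAMS = {
    ("OpenProcess", "WriteProcessMemory"),
    ("WriteProcessMemory", "CreateRemoteThread"),
}

def detect(seq):
    """Flag the sequence if any malicious bigram occurs in it."""
    return bool(bigrams(seq) & MALICIOUS_BIGRAMS)

def transform(seq, noop="GetTickCount"):
    """Insert a benign no-op call between every pair of real calls."""
    out = []
    for call in seq:
        out.append(call)
        out.append(noop)
    return out[:-1]  # drop the trailing no-op

original = ["OpenProcess", "WriteProcessMemory", "CreateRemoteThread"]
evasive = transform(original)

print(detect(original))  # True  -- the original sequence is flagged
print(detect(evasive))   # False -- the transformed sequence evades detection
# The payload is intact: stripping no-ops recovers the original call order.
print([c for c in evasive if c != "GetTickCount"] == original)  # True
```

A real classifier learns far richer features than one fixed bigram set, which is why the paper evaluates the attack against tree-based models and several neural architectures; the sketch only conveys why padding a call sequence with benign calls can perturb sequence-derived features without altering the payload's behaviour.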



Author information

Corresponding author: Anand Handa

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Fadadu, F., Handa, A., Kumar, N., Shukla, S.K. (2020). Evading API Call Sequence Based Malware Classifiers. In: Zhou, J., Luo, X., Shen, Q., Xu, Z. (eds) Information and Communications Security. ICICS 2019. Lecture Notes in Computer Science, vol 11999. Springer, Cham. https://doi.org/10.1007/978-3-030-41579-2_2

  • DOI: https://doi.org/10.1007/978-3-030-41579-2_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-41578-5

  • Online ISBN: 978-3-030-41579-2

  • eBook Packages: Computer Science (R0)
