Evading Deep Neural Network and Random Forest Classifiers by Generating Adversarial Samples

  • Erick Eduardo Bernal Martinez
  • Bella Oh
  • Feng Li
  • Xiao Luo (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11358)

Abstract

With recent advancements in computing technology, machine learning and neural networks are becoming more widespread in applications such as intrusion detection and antivirus software. As a result, data safety and privacy protection increasingly rely on these models. Deep Neural Networks (DNN) and Random Forests (RF) are two of the most widely used and accurate classifiers applied to malware detection. Although their effectiveness has been promising, recent adversarial machine learning research raises concerns about their robustness and resilience against attack or poisoning with adversarial samples. In this research, we evaluate the performance of two adversarial sample generation algorithms, the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Method (FGSM), at poisoning deep neural network and random forest models for function call graph based malware detection. The results show that FGSM and JSMA achieve high success rates in modifying samples so that they pass through the trained DNN and RF models.
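
To make one of the two attacks concrete, the sketch below shows a minimal FGSM perturbation step in PyTorch. It is illustrative only and assumes a generic differentiable classifier over numeric feature vectors (for example, graphlet frequencies); the function name fgsm_perturb, the toy linear model, and the epsilon value are hypothetical choices, not the paper's implementation.

    # Minimal FGSM sketch (assumption: a differentiable PyTorch classifier over
    # numeric feature vectors; this is NOT the paper's implementation).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.05):
        """Return an adversarial copy of x using the Fast Gradient Sign Method."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true label y
        loss.backward()                          # gradient of the loss w.r.t. the input
        # Step in the direction that increases the loss, then detach from the graph.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Toy usage: a hypothetical linear classifier over 20 feature dimensions.
    model = torch.nn.Linear(20, 2)
    x = torch.rand(1, 20)    # one benign-looking feature vector
    y = torch.tensor([0])    # its true class label
    x_adv = fgsm_perturb(model, x, y)

JSMA differs in that it uses the model's Jacobian to build a saliency map and perturbs only the few features that most influence the target class, rather than taking a single signed-gradient step over all features.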

Keywords

Adversarial machine learning · Neural network · Random forest · Graphlet

Notes

Acknowledgment

This research was made possible with the support of the Indiana University-Purdue University Indianapolis Department of Computer and Information Science, with funding from the National Science Foundation and the United States Department of Defense. The authors would like to thank Dr. Mohammad Al Hasan, Tianchong Gao, and Sheila Walter for their support.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Erick Eduardo Bernal Martinez (1)
  • Bella Oh (2)
  • Feng Li (3)
  • Xiao Luo (3) (corresponding author)

  1. Department of CS, IUPUI, Indianapolis, USA
  2. Department of CSE, Michigan State University, East Lansing, USA
  3. Department of CIT, IUPUI, Indianapolis, USA
