Adversarial Attack, Defense, and Applications with Deep Learning Frameworks

  • Zhizhou Yin
  • Wei Liu
  • Sanjay Chawla
Chapter in the Advanced Sciences and Technologies for Security Applications (ASTSA) book series

Abstract

In recent years, deep learning frameworks have been applied in many domains and have achieved promising performance. However, recent work has demonstrated that deep learning frameworks are vulnerable to adversarial attacks: a trained neural network can be manipulated by small perturbations added to legitimate samples. In the computer vision domain, these perturbations can be imperceptible to humans. Because deep learning techniques now form the core of many security-critical applications, including identity-recognition cameras, malware-detection software, and self-driving cars, adversarial attacks have become a crucial security threat to real-world deep learning applications. In this chapter, we first review some state-of-the-art adversarial attack techniques for deep learning frameworks in both white-box and black-box settings. We then discuss recent methods for defending against adversarial attacks on deep learning frameworks. Finally, we explore recent work that applies adversarial attack techniques to popular commercial deep learning applications, such as image classification, speech recognition, and malware detection. These projects demonstrate that many commercial deep learning frameworks are vulnerable to malicious cyber security attacks.
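
To make the attack concrete, the following is a minimal PyTorch sketch of the fast gradient sign method (FGSM), the canonical white-box attack introduced by Goodfellow et al. (2015) and one of the techniques this chapter reviews. The model, the perturbation budget epsilon, and the [0, 1] pixel range are illustrative assumptions rather than details taken from the chapter.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # White-box setting: the attacker can query gradients of the
        # loss with respect to the input sample x (true labels y).
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # One signed gradient step, which maximally increases the loss
        # to first order under an L-infinity budget of epsilon.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Keep the adversarial sample in the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

Even a small epsilon (a few intensity levels on a 0-255 scale) is often enough to flip an undefended classifier's prediction while the perturbation stays imperceptible to humans; defenses such as adversarial training work by reusing examples generated this way during training.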

Keywords

Adversarial learning · Deep learning · Cyber security

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Information Technologies, University of Sydney, Sydney, Australia
  2. Advanced Analytics Institute, University of Technology Sydney, Sydney, Australia
  3. Qatar Computing Research Institute, Doha, Qatar