
A Gradient-Based Algorithm to Deceive Deep Neural Networks

  • Conference paper
  • In: Neural Information Processing (ICONIP 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1142)

Abstract

Deep neural networks have achieved high performance on a variety of image recognition tasks. However, their recognition performance is reported to be unstable under slight perturbations of the input images. To verify this weakness, we propose DeceiveDeep, a gradient-based algorithm for deceiving deep neural networks. Many gradient-based attack methods already exist, such as L-BFGS, FGSM, and DeepFool. Building on one of the original methods, L-BFGS, we exploit the Euclidean norm of the gradient to update the image vector and generate a deceptive image that fools deep neural networks. We construct three types of deep neural network models and one convolutional neural network (CNN) to test the proposed algorithm. On the MNIST and Fashion-MNIST datasets, we evaluate the effectiveness of DeceiveDeep in terms of accuracy on the training data, the testing data, and the CNN model, respectively. The experimental results show that, compared with L-BFGS, DeceiveDeep dramatically decreases the accuracy of the deep models on image recognition.
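The core update described in the abstract — perturbing an image along its loss gradient, scaled by the gradient's Euclidean norm — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the toy logistic "model", the step size `eps`, and all helper names are illustrative stand-ins for the deep networks the paper actually attacks.

```python
import numpy as np

# Sketch of an L2-normalized gradient attack step (assumption: this mirrors the
# spirit of DeceiveDeep's Euclidean-norm update, not its exact procedure).

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": logistic regression on a flattened 28x28 image (784 pixels).
w = rng.normal(scale=0.1, size=784)
b = 0.0

def loss_and_grad(x, y):
    """Cross-entropy loss of the toy model and its gradient w.r.t. the input x."""
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad_x = (p - y) * w  # d(loss)/dx for logistic regression
    return loss, grad_x

def l2_normalized_step(x, y, eps=0.5):
    """One attack step: move x along the loss gradient divided by its L2 norm."""
    _, g = loss_and_grad(x, y)
    g_norm = np.linalg.norm(g)            # Euclidean norm of the gradient
    x_adv = x + eps * g / (g_norm + 1e-12)
    return np.clip(x_adv, 0.0, 1.0)       # keep pixel values in a valid range

x = rng.uniform(0.0, 1.0, size=784)       # stand-in "image"
y = 1.0
loss_before, _ = loss_and_grad(x, y)
x_adv = l2_normalized_step(x, y)
loss_after, _ = loss_and_grad(x_adv, y)
```

Because the perturbation is normalized by the gradient's Euclidean norm, its overall size is controlled by `eps` alone, independent of the gradient's raw magnitude; the clip keeps the adversarial image inside the valid pixel range.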


Notes

  1. http://yann.lecun.com/exdb/mnist/.

  2. https://github.com/zalandoresearch/fashion-mnist.

References

  1. Xie, T., Li, Y.: Efficient integer vector homomorphic encryption using deep learning for neural networks. In: Cheng, L., Leung, A.C.S., Ozawa, S. (eds.) ICONIP 2018. LNCS, vol. 11301, pp. 83–95. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-04167-0_8


  2. Szegedy, C., et al.: Intriguing properties of neural networks. arXiv preprint, arXiv:1312.6199 (2013)

  3. Bengio, Y.: Learning deep architectures for AI. Found. Trends® Mach. Learn. 2, 1–127 (2009). https://doi.org/10.1561/2200000006


  4. Hinton, G.E.: Learning multiple layers of representation. Trends Cogn. Sci. 11, 428–434 (2007). https://doi.org/10.1016/j.tics.2007.09.004


  5. Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: attacks and defenses for deep learning. IEEE Trans. Neural Netw. Learn. Syst. 30, 2805–2824 (2019). https://doi.org/10.1109/TNNLS.2018.2886017


  6. Felzenszwalb, P., McAllester, D., Ramanan, D.: A discriminatively trained, multiscale, deformable part model. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. IEEE Press, New York (2008). https://doi.org/10.1109/CVPR.2008.4587597

  7. Floreano, D., Mattiussi, C.: Bio-inspired Artificial Intelligence: Theories, Methods, and Technologies. MIT Press, Cambridge (2008)


  8. Cully, A., Clune, J., Tarapore, D., Mouret, J.B.: Robots that can adapt like animals. Nature 521, 503–507 (2015). https://doi.org/10.1038/nature14422


  9. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on CVPR, pp. 580–587. IEEE Press, New York (2014). https://doi.org/10.1109/CVPR.2014.81

  10. Goodfellow, I., Lee, H., Le, Q.V., Ng, A.Y.: Measuring invariances in deep networks. In: Proceedings of the 22nd International Conference on NIPS, pp. 646–654. ACM (2009). https://doi.org/10.5555/2984093.2984166

  11. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1_53


  12. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint, arXiv:1301.3781 (2013)

  13. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2012). https://doi.org/10.1145/3065386


  14. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998). https://doi.org/10.1109/5.726791


  15. Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In: CVPR, pp. 427–436. IEEE Press (2015). https://doi.org/10.1109/CVPR.2015.7298640

  16. Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint, arXiv:1312.6034 (2013)

  17. Luo, C., Li, Z., Huang, K., Feng, J., Wang, M.: Zero-shot learning via attribute regression and class prototype rectification. IEEE Trans. Image Process. 27, 637–648 (2018). https://doi.org/10.1109/TIP.2017.2745109


  18. Hu, G., Peng, X., Yang, Y., Hospedales, T.M., Verbeek, J.: Frankenstein: learning deep face representations using small data. IEEE Trans. Image Process. 27, 293–303 (2018). https://doi.org/10.1109/TIP.2017.2756450


  19. Zhou, H., Wornell, G.: Efficient homomorphic encryption on integer vectors and its applications. In: 2014 Information Theory and Applications Workshop, pp. 1–9. IEEE Press, New York (2014). https://doi.org/10.1109/ITA.2014.6804228

  20. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint, arXiv:1412.6572 (2014)

  21. Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: Deepfool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE Conference on CVPR, pp. 2574–2582. IEEE (2016). https://doi.org/10.1109/CVPR.2016.282


Author information

Correspondence to Yantao Li.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Xie, T., Li, Y. (2019). A Gradient-Based Algorithm to Deceive Deep Neural Networks. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Communications in Computer and Information Science, vol 1142. Springer, Cham. https://doi.org/10.1007/978-3-030-36808-1_7


  • DOI: https://doi.org/10.1007/978-3-030-36808-1_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36807-4

  • Online ISBN: 978-3-030-36808-1

  • eBook Packages: Computer Science, Computer Science (R0)
