Disruption of Object Recognition Systems

  • Conference paper
  • In: Intelligent Computing, Information and Control Systems (ICICCS 2019)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1039)

Abstract

Deep neural networks are now used in a wide variety of applications, such as autonomous vehicles, medical imaging and surveillance. While they are becoming increasingly powerful, their task can be disrupted by crafting adversarial inputs. These inputs are created by adding perturbations to the original inputs so that the application using the network, such as an object recognizer, can no longer classify the object in the image correctly. Crafting such inputs is termed an adversarial attack. Here, we implement two disruption strategies: the Fast Gradient Sign Method (FGSM) and generating perturbations with a generator network. While FGSM requires access to the gradient calculated by the classifier with respect to the input image, the generator trains simultaneously with the classifier network and learns to craft perturbations on its own. Once the generator network has been trained with a particular classifier (say, VGG16), it can also disrupt other classifier networks in a black-box fashion. Using the same dataset, in this case CIFAR-10, the classifier can be adversarially trained to make it more robust to perturbed images: it is trained on both the original CIFAR-10 images and those perturbed by the generator. In our experiments, the generator-based attack achieves higher disruption accuracies than FGSM on very deep networks.
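
To make the FGSM attack concrete, the following sketch shows a minimal PyTorch implementation. It is illustrative only: the model interface, the epsilon value and the assumed [0, 1] pixel range are our assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.03):
        """Craft x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        model.zero_grad()
        loss.backward()
        # Step in the direction that increases the classifier's loss,
        # then clamp back to the assumed valid pixel range [0, 1].
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

This is a white-box step: it needs the gradient of the classifier's loss with respect to the input image, which is exactly the access requirement noted above.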
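
The generator-based attack and the adversarial training of the classifier can likewise be sketched as one alternating update. The generator interface (an image-to-perturbation network whose output is bounded via tanh) and the equal weighting of clean and perturbed losses are assumptions made for illustration; the abstract does not specify these details.

    import torch
    import torch.nn.functional as F

    def joint_training_step(classifier, generator, clf_opt, gen_opt,
                            images, labels, epsilon=0.03):
        # Hypothetical generator interface: maps an image batch to a
        # same-shaped perturbation, squashed into [-epsilon, epsilon].
        delta = epsilon * torch.tanh(generator(images))
        adv = (images + delta).clamp(0.0, 1.0)

        # Generator update: learn perturbations that maximize the
        # classifier's loss on the perturbed images.
        gen_loss = -F.cross_entropy(classifier(adv), labels)
        gen_opt.zero_grad()
        gen_loss.backward()
        gen_opt.step()

        # Adversarial training update: the classifier minimizes its loss
        # on both the original images and the generator-perturbed ones.
        adv = adv.detach()
        clf_loss = (F.cross_entropy(classifier(images), labels)
                    + F.cross_entropy(classifier(adv), labels))
        clf_opt.zero_grad()
        clf_loss.backward()
        clf_opt.step()
        return clf_loss.item()

Because the trained generator does not need the target network's gradients at attack time, it can be replayed against classifiers other than the one it was trained with, which is the black-box transfer described above.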

References

  1. Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  2. Crafting Adversarial Attacks with Adversarial Transformations. https://github.com/kawine/atgan

  3. Lee, H., Han, S., Lee, J.: Generative adversarial trainer: defense to adversarial perturbations with GAN. arXiv preprint arXiv:1705.03387 (2017)

  4. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)

  5. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 2672–2680 (2014)

  6. Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519. ACM (2017)

  7. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)

  8. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  9. Baluja, S., Fischer, I.: Adversarial transformation networks: learning to generate adversarial examples. arXiv preprint arXiv:1703.09387 (2017)

  10. Kingma, D., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

  11. Krizhevsky, A., Nair, V., Hinton, G.: The CIFAR-10 dataset (2014). https://www.cs.toronto.edu/~kriz/cifar.html

  12. Shen, S., Jin, G., Gao, K., Zhang, Y.: APE-GAN: adversarial perturbation elimination with GAN. arXiv preprint arXiv:1707.05474 (2017)

  13. Computer vision. https://en.wikipedia.org/wiki/Computer_vision#Applications

Author information

Correspondence to Utsav Das, Aman Gupta, Onkar Singh Bagga or Manoj Sabnis.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Das, U., Gupta, A., Bagga, O.S., Sabnis, M. (2020). Disruption of Object Recognition Systems. In: Pandian, A., Ntalianis, K., Palanisamy, R. (eds) Intelligent Computing, Information and Control Systems. ICICCS 2019. Advances in Intelligent Systems and Computing, vol 1039. Springer, Cham. https://doi.org/10.1007/978-3-030-30465-2_56
