An Universal Perturbation Generator for Black-Box Attacks Against Object Detectors

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11910)

Abstract

With the continuous development of deep neural networks (DNNs), they have become the main means of solving problems in computer vision. However, recent research has shown that DNNs are vulnerable to well-designed adversarial examples. In this paper, we use a deep neural network to generate adversarial examples that attack black-box object detectors. We train a generator network to produce universal perturbations, achieving a cross-task attack against black-box object detectors and demonstrating the feasibility of task-generalizable attacks: efficient universal perturbations are generated on classifiers and then transferred to attack object detectors. We demonstrate the effectiveness of our attack on two representative object detectors: the proposal-based Faster R-CNN and the regression-based YOLOv3.
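The pipeline described in the abstract (a generator trained against a surrogate classifier, whose universal perturbation is then transferred to black-box detectors) can be illustrated with a minimal sketch. This is not the authors' code: the generator architecture, the ResNet-18 surrogate, the L-infinity budget `eps`, and the toy training loop below are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation): a generator G
# maps a fixed latent code to a single universal perturbation, trained to
# maximize the loss of a white-box surrogate classifier. The trained
# perturbation is then reused against black-box object detectors.
import torch
import torch.nn as nn
import torchvision


class PerturbationGenerator(nn.Module):
    """Maps a fixed latent code to one image-sized perturbation."""

    def __init__(self, latent_dim=100, image_size=224):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 3 * image_size * image_size),
            nn.Tanh(),  # keep the raw perturbation in [-1, 1]
        )

    def forward(self, z):
        out = self.net(z)
        return out.view(-1, 3, self.image_size, self.image_size)


eps = 10.0 / 255.0                                    # assumed L_inf budget
surrogate = torchvision.models.resnet18(weights=None).eval()  # surrogate classifier (assumed)
for p in surrogate.parameters():
    p.requires_grad_(False)                           # only the generator is trained

G = PerturbationGenerator()
opt = torch.optim.Adam(G.parameters(), lr=1e-4)
z = torch.randn(1, 100)                               # fixed latent -> one universal perturbation

for step in range(10):                                # toy loop; real training sweeps a dataset
    images = torch.rand(4, 3, 224, 224)               # stand-in for a batch of natural images
    delta = eps * G(z)                                # scale perturbation into the budget
    adv = (images + delta).clamp(0.0, 1.0)            # the same perturbation is added to every image
    labels = surrogate(images).argmax(dim=1)          # surrogate's own predictions as labels
    loss = -nn.functional.cross_entropy(surrogate(adv), labels)  # maximize misclassification
    opt.zero_grad()
    loss.backward()
    opt.step()

# The resulting universal perturbation (eps * G(z)) would then be added to the
# inputs of a black-box detector such as Faster R-CNN or YOLOv3 to mount the
# cross-task attack described in the paper.
```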



Acknowledgment

This work is supported by the National Natural Science Foundation of China (Nos. 61876019 and U1636213).

Author information

Corresponding author

Correspondence to Xiaosong Zhang.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhao, Y., Wang, K., Xue, Y., Zhang, Q., Zhang, X. (2019). An Universal Perturbation Generator for Black-Box Attacks Against Object Detectors. In: Qiu, M. (eds) Smart Computing and Communication. SmartCom 2019. Lecture Notes in Computer Science, vol 11910. Springer, Cham. https://doi.org/10.1007/978-3-030-34139-8_7

  • DOI: https://doi.org/10.1007/978-3-030-34139-8_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-34138-1

  • Online ISBN: 978-3-030-34139-8

  • eBook Packages: Computer Science, Computer Science (R0)
