Abstract
With the continuous development of deep neural networks (DNNs), they have become the primary means of solving problems in computer vision. However, recent research has shown that DNNs are vulnerable to carefully crafted adversarial examples. In this paper, we used a deep neural network to generate adversarial examples that attack black-box object detectors. We trained a generator network to produce universal perturbations, achieving a cross-task attack against black-box object detectors and demonstrating the feasibility of task-generalizable attacks: our attack generated effective universal perturbations on classifiers and then used them to attack object detectors. We verified the effectiveness of the attack on two representative object detectors: the proposal-based Faster R-CNN and the regression-based YOLOv3.
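The abstract does not specify the generator architecture or training objective, so the following is only a minimal PyTorch sketch of the general idea it describes: train a small generator against a white-box surrogate classifier to produce a single image-agnostic perturbation, which is then transferred unchanged to a black-box detector. The generator layout, the ResNet-50 surrogate, the fooling loss, and all hyperparameters here are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical generator: maps a fixed noise seed to ONE universal perturbation.
class PerturbationGenerator(nn.Module):
    def __init__(self, eps=10 / 255):
        super().__init__()
        self.eps = eps  # L-infinity budget on the perturbation (assumed value)
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, z):
        # tanh keeps the perturbation inside the epsilon ball
        return self.eps * torch.tanh(self.net(z))

# White-box surrogate classifier; the perturbation trained against it is
# later applied, unchanged, to the black-box object detector (cross-task
# transfer). Input normalization is omitted for brevity; images are in [0, 1].
surrogate = models.resnet50(pretrained=True).eval()
for p in surrogate.parameters():
    p.requires_grad_(False)

gen = PerturbationGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
z = torch.randn(1, 3, 224, 224)  # fixed seed -> image-agnostic perturbation

def train_step(images):
    """One update: push the surrogate's predictions away from the clean labels."""
    delta = gen(z)
    adv = torch.clamp(images + delta, 0.0, 1.0)
    with torch.no_grad():
        clean_labels = surrogate(images).argmax(dim=1)
    # Untargeted fooling objective: maximize loss on the clean predictions.
    loss = -nn.functional.cross_entropy(surrogate(adv), clean_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Once trained, `gen(z)` is computed once and the resulting perturbation is added to every detector input; no detector gradients or queries are used during training, which is what makes the attack black-box with respect to the detector.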
Acknowledgment
This work is supported by the National Natural Science Foundation of China (Nos. 61876019 and U1636213).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Zhao, Y., Wang, K., Xue, Y., Zhang, Q., Zhang, X. (2019). An Universal Perturbation Generator for Black-Box Attacks Against Object Detectors. In: Qiu, M. (eds) Smart Computing and Communication. SmartCom 2019. Lecture Notes in Computer Science, vol 11910. Springer, Cham. https://doi.org/10.1007/978-3-030-34139-8_7
DOI: https://doi.org/10.1007/978-3-030-34139-8_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-34138-1
Online ISBN: 978-3-030-34139-8
eBook Packages: Computer Science; Computer Science (R0)