
TranFuzz: An Ensemble Black-Box Attack Framework Based on Domain Adaptation and Fuzzing

  • Conference paper

Information and Communications Security (ICICS 2021)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12918)

Abstract

Considerable research effort has been devoted to attacking black-box neural networks. However, less attention has been paid to the more challenging setting in which both the data and the neural network are black-box. This paper analyzes the relationship between the data black-box and model black-box challenges and proposes an effective and efficient non-targeted attack framework, namely TranFuzz. On the one hand, TranFuzz introduces a domain adaptation-based method that reduces the data difference between the local (or source) and target domains by leveraging sub-domain feature mapping. On the other hand, TranFuzz proposes a fuzzing-based method to generate imperceptible adversarial examples with high transferability. Experimental results indicate that the proposed method achieves an attack success rate of more than 68% in a real-world CVS attack. Moreover, TranFuzz can also improve both the robustness (by up to 3.3%) and precision (by up to 5%) of the original neural network through adversarial re-training.
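To make the fuzzing idea in the abstract concrete: a minimal sketch of a random-mutation black-box attack loop, where candidate perturbations are kept only if they flip the target model's prediction. This is an illustration of the general technique, not the paper's actual TranFuzz algorithm; `query_model`, the mutation strategy, and the `eps` budget are all assumptions.

```python
import numpy as np

def fuzz_attack(query_model, x, n_iters=500, eps=0.03, rng=None):
    """Illustrative fuzzing loop: randomly mutate an input and return the
    first mutation that changes the black-box model's predicted label.
    `query_model` maps an image with values in [0, 1] to a class index."""
    rng = rng if rng is not None else np.random.default_rng(0)
    orig_label = query_model(x)
    for _ in range(n_iters):
        # Random perturbation clipped to an L-infinity ball of radius eps
        # around the original image, keeping the change imperceptible.
        noise = rng.uniform(-eps, eps, size=x.shape)
        candidate = np.clip(x + noise, 0.0, 1.0)
        if query_model(candidate) != orig_label:
            return candidate  # adversarial example found
    return None  # attack failed within the query budget
```

A real framework in this vein would replace the uniform noise with guided mutations (e.g. transferability feedback from a local surrogate model), but the accept/reject structure of the loop is the same.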


Notes

  1. https://github.com/tensorflow/cleverhans/tree/master/examples/nips17_adversarial_competition/dataset.

  2. https://github.com/lihaoSDU/ICICS2021.

  3. https://github.com/jindongwang/transferlearning/tree/master/data.

  4. https://vision.aliyun.com/imagerecog.

  5. https://ai.baidu.com/tech/imagerecognition/general.

  6. https://ai.qq.com/product/visionimgidy.shtml.

  7. https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision.

  8. https://www.clarifai.com/label.

  9. The Attack Success Rate is the proportion of adversarial examples misclassified by the target DNN [14].
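The Attack Success Rate defined in note 9 reduces to a simple fraction. A minimal illustration (not code from the paper; the function name and inputs are assumptions):

```python
def attack_success_rate(true_labels, adv_predictions):
    """Fraction of adversarial examples the target model misclassifies,
    i.e. examples whose predicted label differs from the true label."""
    assert len(true_labels) == len(adv_predictions)
    misclassified = sum(t != p for t, p in zip(true_labels, adv_predictions))
    return misclassified / len(true_labels)
```

For example, if 3 of 4 adversarial examples change the model's prediction, the ASR is 0.75; the paper's reported figure of more than 68% corresponds to this ratio over the full evaluation set.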

References

  1. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR 2016, pp. 770–778 (2016)

  2. Pei, K., Cao, Y., Yang, J., Jana, S.: DeepXplore: automated whitebox testing of deep learning systems. In: SOSP 2017, pp. 1–18 (2017)

  3. Xie, C., et al.: Improving transferability of adversarial examples with input diversity. In: CVPR 2019, pp. 2730–2739 (2019)

  4. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. CoRR abs/1706.06083 (2017)

  5. Bhagoji, A.N., He, W., Li, B., Song, D.: Exploring the space of black-box attacks on deep neural networks. In: European Conference on Computer Vision (2019)

  6. Suya, F., Chi, J., Evans, D., Tian, Y.: Hybrid batch attacks: finding black-box adversarial examples with limited queries. In: USENIX Security Symposium 2020, pp. 1327–1344 (2020)

  7. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010)

  8. Wang, M., Deng, W.: Deep visual domain adaptation: a survey. Neurocomputing 312, 135–153 (2018)

  9. Zhu, Y., Zhuang, F., Wang, J., et al.: Deep subdomain adaptation network for image classification. IEEE Trans. Neural Netw. Learn. Syst. 32, 1713–1722 (2020)

  10. Long, M., Cao, Y., Wang, J., Jordan, M.I.: Learning transferable features with deep adaptation networks. In: ICML 2015, pp. 97–105 (2015)

  11. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)

  12. Xiao, Q., Chen, Y., Shen, C., Chen, Y., Li, K.: Seeing is not believing: camouflage attacks on image scaling algorithms. In: USENIX Security Symposium 2019, pp. 443–460 (2019)

  13. Hu, Q., Ma, L., Xie, X., Yu, B., Liu, Y., Zhao, J.: DeepMutation++: a mutation testing framework for deep learning systems. In: ASE 2019, pp. 1158–1161 (2019)

  14. Rony, J., Hafemann, L.G., Oliveira, L.S., Ayed, I.B., Sabourin, R., Granger, E.: Decoupling direction and norm for efficient gradient-based L2 adversarial attacks and defenses. In: CVPR 2019, pp. 4322–4330 (2019)

  15. Chen, P.-Y., Zhang, H., Sharma, Y., Yi, J., Hsieh, C.-J.: ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In: AISec@CCS 2017, pp. 15–26 (2017)

  16. Saenko, K., Kulis, B., Fritz, M., Darrell, T.: Adapting visual category models to new domains. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6314, pp. 213–226. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15561-1_16

  17. Szegedy, C., et al.: Intriguing properties of neural networks. In: ICLR (Poster) (2014)

  18. Carlini, N., Wagner, D.A.: Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy, pp. 39–57 (2017)

  19. Venkateswara, H., Eusebio, J., Chakraborty, S., Panchanathan, S.: Deep hashing network for unsupervised domain adaptation. In: CVPR 2017, pp. 5385–5394 (2017)

  20. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (2015)

  21. Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 23(5), 828–841 (2019)

  22. Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., Madry, A.: Exploring the landscape of spatial robustness. In: ICML 2019, pp. 1802–1811 (2019)

  23. AdverTorch. https://github.com/BorealisAI/advertorch

  24. Adversarial Robustness Toolbox. https://github.com/Trusted-AI/adversarial-robustness-toolbox

  25. Wong, E., Rice, L., Kolter, J.Z.: Fast is better than free: revisiting adversarial training. In: ICLR 2020 (2020)

Acknowledgment

This research was supported by the National Natural Science Foundation of China (No. 62002203), the Major Scientific and Technological Innovation Projects of Shandong Province, China (No. 2018CXGC0708, No. 2019JZZY010132), the Shandong Provincial Natural Science Foundation (No. ZR2020MF055, No. ZR2020LZH002, No. ZR2020QF045), the Fundamental Research Funds of Shandong University (No. 2019GN095), and the Open Project of the Key Laboratory of Network Assessment Technology, Institute of Information Engineering, Chinese Academy of Sciences (No. KFKT2019-002).

Corresponding author

Correspondence to Shanqing Guo.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Li, H., Guo, S., Tang, P., Hu, C., Chen, Z. (2021). TranFuzz: An Ensemble Black-Box Attack Framework Based on Domain Adaptation and Fuzzing. In: Gao, D., Li, Q., Guan, X., Liao, X. (eds) Information and Communications Security. ICICS 2021. Lecture Notes in Computer Science, vol 12918. Springer, Cham. https://doi.org/10.1007/978-3-030-86890-1_15

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-86890-1_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86889-5

  • Online ISBN: 978-3-030-86890-1

  • eBook Packages: Computer Science; Computer Science (R0)
