Defence Against Adversarial Attacks Using Clustering Algorithm

  • Conference paper in Data Science (ICPCSEE 2019)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 1058)

Abstract

Deep learning models are vulnerable to adversarial examples in image classification tasks. This paper proposes a clustering-based method for defending against adversarial examples: before an input reaches the classifier, it is reconstructed by a clustering algorithm applied to its pixel values. The MNIST database of handwritten digits was used to assess the defence performance of the method against the fast gradient sign method (FGSM) and the DeepFool algorithm. The proposed defence is simple, and the trained classifier does not need to be retrained.
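As a rough illustration of the idea, the sketch below re-quantises a greyscale image by clustering its pixel intensities with k-means (the classical algorithm of MacQueen [11]) and snapping every pixel to its cluster centre, so that small adversarial perturbations such as those added by FGSM (x' = x + ε·sign(∇_x J(θ, x, y)) [4]) are largely absorbed into the nearest centre. This is a minimal sketch, not the authors' implementation: the use of scikit-learn, the function name reconstruct_by_clustering, and the choice n_clusters=4 are illustrative assumptions.

    # Minimal sketch (not the authors' code) of a clustering-based
    # reconstruction defence: cluster the pixel intensities of a single
    # image with k-means and replace every pixel by its cluster centre.
    # n_clusters=4 is an assumption chosen for illustration.
    import numpy as np
    from sklearn.cluster import KMeans

    def reconstruct_by_clustering(image: np.ndarray, n_clusters: int = 4) -> np.ndarray:
        """Map each pixel of a greyscale image to its k-means cluster centre."""
        pixels = image.reshape(-1, 1)  # one sample per pixel, one feature: intensity
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
        centres = km.cluster_centers_.ravel()
        return centres[km.labels_].reshape(image.shape)

    # Usage: reconstruct a (possibly adversarial) 28x28 MNIST image before
    # handing it to the already-trained classifier; the classifier itself
    # is never retrained.
    # x_input = reconstruct_by_clustering(x_possibly_adversarial)
    # prediction = classifier.predict(x_input[np.newaxis, ..., np.newaxis])

Because the reconstruction operates purely on the input, a preprocessing step of this kind can be placed in front of any pre-trained classifier without retraining it.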


References

  1. Bose, A.J., Aarabi, P.: Adversarial attacks on face detectors using neural net based constrained optimization. In: IEEE International Workshop on Multimedia Signal Processing, Vancouver, BC, Canada, 29–31 August 2018. https://doi.org/10.1109/MMSP.2018.8547128

  2. Eykholt, K., Evtimov, I., Fernandes, E., et al.: Robust physical-world attacks on deep learning visual classification. In: IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018, pp. 1625–1634 (2018)

  3. Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, Montreal, Quebec, Canada, 8–13 December 2014, pp. 2672–2680 (2014). http://papers.nips.cc/paper/5423-generative-adversarial-nets

  4. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (2015). http://arxiv.org/abs/1412.6572

  5. Guo, C., Rana, M., Cissé, M., et al.: Countering adversarial images using input transformations. In: International Conference on Learning Representations (2018). https://openreview.net/forum?id=SyJ7ClWCb

  6. LeCun, Y., Bottou, L., Bengio, Y., et al.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  7. LeCun, Y., Cortes, C., Burges, C.J.C.: The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/

  8. Liang, B., Li, H., Su, M., et al.: Detecting adversarial image examples in deep neural networks with adaptive noise reduction. IEEE Trans. Dependable Secure Comput. (2018). https://doi.org/10.1109/TDSC.2018.2874243

  9. Liao, F., Liang, M., Dong, Y., et al.: Defense against adversarial attacks using high-level representation guided denoiser. In: IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018, pp. 1778–1787 (2018). https://doi.org/10.1109/CVPR.2018.00191

  10. Ma, X., Li, B., Wang, Y., et al.: Characterizing adversarial subspaces using local intrinsic dimensionality. In: International Conference on Learning Representations (2018). https://openreview.net/forum?id=B1gJ1L2aW

  11. MacQueen, J.B.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297. University of California Press (1967)

  12. McCoyd, M., Wagner, D.A.: Background class defense against adversarial examples. In: IEEE Security and Privacy Workshops, San Francisco, CA, USA, 24 May 2018, pp. 96–102 (2018). https://doi.org/10.1109/SPW.2018.00023

  13. Moosavi-Dezfooli, S., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016, pp. 2574–2582 (2016). https://doi.org/10.1109/CVPR.2016.282

  14. Samangouei, P., Kabkab, M., Chellappa, R.: Defense-GAN: protecting classifiers against adversarial attacks using generative models. In: International Conference on Learning Representations (2018). https://openreview.net/forum?id=BkJ3ibb0-

  15. Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M.K.: Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In: Proceedings of ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016, pp. 1528–1540 (2016). https://doi.org/10.1145/2976749.2978392

  16. Xu, W., Evans, D., Qi, Y.: Feature squeezing: detecting adversarial examples in deep neural networks. In: 25th Annual Network and Distributed System Security Symposium, San Diego, California, USA, 18–21 February 2018 (2018). http://wp.internetsociety.org/ndss/wp-content/uploads/sites/25/2018/02/ndss2018_03A-4_Xu_paper.pdf

Acknowledgment

We would like to thank Zhichao Xia, Zhi Guo, Xiaodong Mu, Bixia Liu and Professor Yimin Wen for their helpful suggestions. The work was partially supported by the National NSF of China (61602125, 61772150, 61862011, 61862012), the China Postdoctoral Science Foundation (2018M633041), the NSF of Guangxi (2016GXNSFBA380153, 2017GXNSFAA198192, 2018GXNSFAA138116, 2018GXNSFAA281232, 2018GXNSFDA281054), the Guangxi Science and Technology Plan Project (AD18281065), the Guangxi Key R&D Program (AB17195025), the Guangxi Key Laboratory of Cryptography and Information Security (GCIS201625, GCIS201704), the National Cryptography Development Fund of China (MMJJ20170217), the research start-up grants of Dongguan University of Technology, and the Postgraduate Education Innovation Project of Guilin University of Electronic Technology (2018YJCX51, 2019YCXS052).

Author information

Corresponding author

Correspondence to Wenfen Liu.

Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zheng, Y., Yun, H., Wang, F., Ding, Y., Huang, Y., Liu, W. (2019). Defence Against Adversarial Attacks Using Clustering Algorithm. In: Cheng, X., Jing, W., Song, X., Lu, Z. (eds) Data Science. ICPCSEE 2019. Communications in Computer and Information Science, vol 1058. Springer, Singapore. https://doi.org/10.1007/978-981-15-0118-0_25

  • DOI: https://doi.org/10.1007/978-981-15-0118-0_25

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-0117-3

  • Online ISBN: 978-981-15-0118-0

  • eBook Packages: Computer Science, Computer Science (R0)
