
Exploring Adversarial Examples

Patterns of One-Pixel Attacks

  • Conference paper
Understanding and Interpreting Machine Learning in Medical Image Computing Applications (MLCN 2018, DLF 2018, IMIMIC 2018)

Abstract

Failure cases of black-box deep learning, e.g. adversarial examples, might have severe consequences in healthcare. Yet such failures are mostly studied in the context of real-world images with calibrated attacks. To demystify adversarial examples, rigorous studies need to be designed. Unfortunately, the complexity of medical images hinders designing such studies directly on medical data. We hypothesize that adversarial examples might result from the incorrect mapping of the image space to the low-dimensional generation manifold by deep networks. To test this hypothesis, we simplify a complex medical problem, namely pose estimation of surgical tools, into its barest form. An analytical decision boundary and an exhaustive search of the one-pixel attack across multiple image dimensions let us localize the regions of the image space where one-pixel attacks frequently succeed.
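The exhaustive one-pixel search described above can be illustrated with a minimal sketch. The toy classifier below (a mean-intensity threshold standing in for an analytical decision boundary), the image size, and the candidate pixel values are all illustrative assumptions, not the paper's surgical-tool setup: for every pixel, we try a set of replacement values and record the coordinates where the predicted label flips.

```python
# Illustrative sketch only: exhaustive one-pixel attack search on a toy
# classifier with an analytical decision boundary (mean-intensity threshold).
# Classifier, image, and candidate values are assumptions for demonstration.

def classify(image, threshold=0.5):
    """Toy binary classifier: label 1 iff mean intensity exceeds threshold."""
    flat = [p for row in image for p in row]
    return int(sum(flat) / len(flat) > threshold)

def one_pixel_attack_sites(image, values=(0.0, 1.0)):
    """Exhaustively perturb each pixel in turn; return the coordinates where
    at least one candidate value flips the predicted label."""
    original = classify(image)
    sites = []
    for y, row in enumerate(image):
        for x, old in enumerate(row):
            for v in values:
                row[x] = v
                if classify(image) != original:
                    sites.append((y, x))
                    break
            row[x] = old  # restore the pixel before moving on
    return sites

# A 2x2 image sitting close to the decision boundary (mean 0.525), so a
# single-pixel change suffices to flip the label at every location.
img = [[0.6, 0.6], [0.6, 0.3]]
print(one_pixel_attack_sites(img))  # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Mapping the returned coordinates over many images is the kind of localization of vulnerable image-space regions the abstract refers to, here reduced to a toy scale.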



Author information

Corresponding author

Correspondence to David Kügler.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Kügler, D., Distergoft, A., Kuijper, A., Mukhopadhyay, A. (2018). Exploring Adversarial Examples. In: Stoyanov, D., et al. (eds) Understanding and Interpreting Machine Learning in Medical Image Computing Applications. MLCN DLF IMIMIC 2018. Lecture Notes in Computer Science, vol 11038. Springer, Cham. https://doi.org/10.1007/978-3-030-02628-8_8

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-02628-8_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-02627-1

  • Online ISBN: 978-3-030-02628-8

  • eBook Packages: Computer Science (R0)
