
Interactive Deep Editing Framework for Medical Image Segmentation

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 (MICCAI 2019)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11766)

Abstract

Deep neural networks achieve excellent performance in segmenting 3D medical images. However, because medical images are complex, the accuracy of fully automatic algorithms is often insufficient, so further manual editing is required. To address this problem, this paper proposes an interactive editing method built on a 3D end-to-end segmentation network. In the training stage, we simulate user interactions, which serve as training data, by comparing the segmentation automatically generated by a convolutional neural network with the ground truth. The user interactions are fed into the network along with the images, allowing the network to adjust the segmentation results based on the user's edits. Our system provides three editing tools for smartly fixing segmentation errors, covering the editing styles most commonly used in medical image segmentation. Leveraging the high-level semantic information in the network, our method can edit 3D segmentations efficiently and accurately. Interactive editing experiments on the BraTS dataset show that our method significantly improves segmentation accuracy with only a small number of interactions. The proposed method shows potential for clinical applications.
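The training-stage simulation described in the abstract can be sketched as follows. This is an illustrative assumption of how such simulation might work, not the authors' exact implementation: the click-sampling strategy (random points in error regions) and the channel encoding of interactions are hypothetical details chosen for the sketch.

```python
import numpy as np

def simulate_interactions(pred, gt, n_clicks=3, rng=None):
    """Simulate user edits by comparing an automatic segmentation
    with the ground truth (a sketch of the paper's training-stage idea).

    pred, gt : binary 3D arrays of shape (D, H, W).
    Returns two binary maps: foreground clicks placed in
    false-negative voxels, background clicks in false-positive voxels.
    """
    rng = np.random.default_rng(rng)
    fg_clicks = np.zeros_like(gt, dtype=np.uint8)
    bg_clicks = np.zeros_like(gt, dtype=np.uint8)

    def place(region, canvas):
        # Sample up to n_clicks distinct voxels inside the error region.
        coords = np.argwhere(region)
        if len(coords):
            picks = rng.choice(len(coords),
                               size=min(n_clicks, len(coords)),
                               replace=False)
            for idx in picks:
                canvas[tuple(coords[idx])] = 1

    place((gt == 1) & (pred == 0), fg_clicks)  # missed regions -> "add" clicks
    place((gt == 0) & (pred == 1), bg_clicks)  # over-segmented -> "erase" clicks
    return fg_clicks, bg_clicks

# The image and the two interaction maps are stacked as input channels,
# so the network can condition its refined segmentation on the edits.
image = np.zeros((8, 8, 8), dtype=np.float32)
pred = np.zeros((8, 8, 8), dtype=np.uint8); pred[2:5, 2:5, 2:5] = 1
gt = np.zeros((8, 8, 8), dtype=np.uint8);   gt[3:7, 3:7, 3:7] = 1
fg, bg = simulate_interactions(pred, gt, rng=0)
net_input = np.stack([image, fg, bg], axis=0)  # shape (3, 8, 8, 8)
```

At inference time, the same encoding would let real user clicks replace the simulated ones, so the network refines its output without retraining.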



Acknowledgements

This research is partially supported by the National Key R&D Program of China (Grant No. 2017YFB1304301) and National Natural Science Foundation of China (Grant Nos. 61572274, 61972221).

Author information


Corresponding author

Correspondence to Li Chen.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhou, B., Chen, L., Wang, Z. (2019). Interactive Deep Editing Framework for Medical Image Segmentation. In: Shen, D., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Lecture Notes in Computer Science, vol 11766. Springer, Cham. https://doi.org/10.1007/978-3-030-32248-9_37

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-32248-9_37


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32247-2

  • Online ISBN: 978-3-030-32248-9

  • eBook Packages: Computer Science, Computer Science (R0)
