
Iterative Interaction Training for Segmentation Editing Networks

  • Gustav Bredell
  • Christine Tanner
  • Ender Konukoglu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11046)

Abstract

Automatic segmentation has great potential to facilitate morphological measurements while simultaneously increasing efficiency. Nevertheless, users often want to edit a segmentation to fit their needs and require dedicated tools to do so. Methods have been developed to edit the output of automatic methods based on user input, but primarily for binary segmentations. Here, we present a unique training strategy for convolutional neural networks (CNNs) trained on top of an automatic method to enable interactive segmentation editing that is not limited to binary segmentation. By employing a robot user during training, we closely mimic realistic use cases to achieve optimal editing performance. In addition, we show that increasing the number of iterative interactions during training, up to ten, substantially improves segmentation editing performance. Furthermore, we compare our segmentation editing CNN (interCNN) to state-of-the-art interactive segmentation algorithms and show superior or on-par performance.
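
To make the training strategy concrete, below is a minimal PyTorch sketch of the iterative robot-user loop the abstract describes. Every name in it (simulate_scribbles, auto_net, inter_cnn, the 5% error-sampling rate, the channel layout of the interCNN input) is an illustrative assumption, not the authors' implementation; the sketch only shows the shape of the loop: automatic prediction, simulated user corrections, refinement by the editing CNN, repeated up to ten times per training step.

```python
import torch

NUM_INTERACTIONS = 10  # the abstract reports gains for up to ten interactions


def simulate_scribbles(prediction, ground_truth, frac=0.05):
    """Robot user (assumed behaviour): annotate a random subset of
    mislabelled pixels with their true class.

    Returns a map the same shape as ``ground_truth`` where 0 means
    "no user input" and k+1 means the user marked that pixel as class k.
    The sampling fraction ``frac`` is an arbitrary placeholder.
    """
    errors = prediction != ground_truth
    keep = torch.rand(ground_truth.shape, device=ground_truth.device) < frac
    scribbles = torch.zeros_like(ground_truth)
    mask = errors & keep
    scribbles[mask] = ground_truth[mask] + 1
    return scribbles


def train_step(auto_net, inter_cnn, optimizer, loss_fn, image, ground_truth):
    """One training step of the iterative interaction strategy (sketch)."""
    # 1. Start from the frozen automatic network's prediction.
    with torch.no_grad():
        current = auto_net(image).argmax(dim=1)          # (B, H, W)

    # 2. Alternate: robot user marks errors -> interCNN refines -> repeat.
    for _ in range(NUM_INTERACTIONS):
        scribbles = simulate_scribbles(current, ground_truth)
        # Assumed input layout: image, current segmentation, scribbles.
        inp = torch.cat([image,
                         current.unsqueeze(1).float(),
                         scribbles.unsqueeze(1).float()], dim=1)
        logits = inter_cnn(inp)                          # (B, K, H, W)
        loss = loss_fn(logits, ground_truth)             # e.g. CrossEntropyLoss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        current = logits.argmax(dim=1).detach()
    return loss.item()
```

Feeding the previous prediction back in at each iteration is what would let the editing network learn to act on accumulated corrections rather than on a single round of user input, which is the point of training with iterative rather than one-shot interactions.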


Acknowledgements

We thank the Swiss Data Science Center (project C17-04 deepMICROIA) for funding and acknowledge NVIDIA for GPU support.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Gustav Bredell¹
  • Christine Tanner¹
  • Ender Konukoglu¹
  1. Computer Vision Laboratory, ETH Zurich, Zurich, Switzerland
