
Pathology Segmentation Using Distributional Differences to Images of Healthy Origin

  • Simon Andermatt (email author)
  • Antal Horváth
  • Simon Pezold
  • Philippe Cattin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11383)

Abstract

Fully supervised segmentation methods require a large training cohort of already segmented images, providing information at the pixel level for each image. We present a method to automatically segment and model pathologies in medical images, trained solely on data labelled at the image level as either healthy or containing a visual defect. We base our method on CycleGAN, an image-to-image translation technique, to translate images between the healthy and pathological domains. We extend the core idea with two key contributions. First, implementing the generators as residual generators allows us to explicitly model the segmentation of the pathology. Second, realizing the translation from the healthy to the pathological domain with a variational autoencoder allows us to specify one representation of the pathology, as this transformation is otherwise not unique. Our model hence not only creates pixelwise semantic segmentations; it also produces inpaintings for the segmented regions that render the pathological image healthy. Furthermore, we can draw new, unseen pathology samples from this model based on the distribution in the data. We show quantitatively that our method segments pathologies with surprising accuracy, only slightly inferior to a state-of-the-art fully supervised method, even though the latter is trained with per-pixel rather than per-image labels. Moreover, we show qualitative results of both the segmentations and the inpaintings. Our findings motivate further research into weakly supervised segmentation using image-level annotations, allowing for faster and cheaper acquisition of training data without a large sacrifice in segmentation accuracy.
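The residual-generator idea described in the abstract can be sketched as follows: the translation from the pathological to the healthy domain predicts an additive residual, and subtracting that residual both inpaints the image and directly yields a pixelwise segmentation from the residual's magnitude. Below is a minimal NumPy sketch of this decomposition; `toy_generator` and the threshold value are hypothetical stand-ins for the trained residual generator network and its learned decision, not the paper's actual implementation.

```python
import numpy as np

def residual_translate(x_path, residual_fn, thresh=0.1):
    """Translate a pathological image toward its healthy counterpart by
    subtracting a predicted additive residual (the lesion signal), and
    derive a pixelwise segmentation by thresholding that residual."""
    r = residual_fn(x_path)                       # predicted residual
    x_healthy = x_path - r                        # inpainted "healthy" image
    seg = (np.abs(r) > thresh).astype(np.uint8)   # pixelwise segmentation
    return x_healthy, seg

# Toy example: a flat image with a bright 3x3 square standing in for a lesion.
x = np.zeros((8, 8))
x[2:5, 2:5] = 1.0

# Stand-in for a trained generator: treat bright pixels as the residual.
toy_generator = lambda img: np.where(img > 0.5, img, 0.0)

healthy, mask = residual_translate(x, toy_generator)
```

Here `healthy` is the inpainted image with the lesion removed, and `mask` marks the nine lesion pixels; in the paper this residual is produced by a learned generator within the CycleGAN cycle rather than a fixed rule.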

Notes

Acknowledgements

We are grateful to the MIAC corporation for generously funding this work.

Supplementary material

Supplementary material 1: 479725_1_En_23_MOESM1_ESM.pdf (PDF, 6.2 MB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Simon Andermatt (1), email author
  • Antal Horváth (1)
  • Simon Pezold (1)
  • Philippe Cattin (1)

  1. Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
