
Anatomical Priors for Image Segmentation via Post-processing with Denoising Autoencoders

  • Agostina J. Larrazabal
  • Cesar Martinez
  • Enzo Ferrante
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

Deep convolutional neural networks (CNNs) have proven to be highly accurate for anatomical segmentation of medical images. However, some of the most popular CNN architectures for image segmentation still rely on post-processing strategies (e.g. Conditional Random Fields) to incorporate connectivity constraints into the resulting masks. These post-processing steps are based on the assumption that objects are usually continuous, so nearby pixels should be assigned the same object label. Although this assumption is generally valid, such methods offer no straightforward way to incorporate more complex priors like convexity or arbitrary shape restrictions.
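
To make this connectivity assumption concrete, the sketch below shows a typical fully connected CRF refinement step with Gaussian edge potentials, in the spirit of Krähenbühl and Koltun. It assumes the third-party pydensecrf package; the function name crf_refine, the kernel parameters, and the input shapes are illustrative assumptions rather than the configuration of any particular segmentation pipeline.

```python
# Illustrative only: dense CRF refinement of CNN softmax outputs, assuming the
# pydensecrf package is installed. Pairwise terms encode the prior that nearby
# (and similar-looking) pixels should share the same label.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=5):
    """image: (H, W, 3) uint8 array; probs: (C, H, W) softmax scores from a CNN."""
    n_labels, height, width = probs.shape
    d = dcrf.DenseCRF2D(width, height, n_labels)
    d.setUnaryEnergy(unary_from_softmax(probs))      # -log(p) unary terms
    d.addPairwiseGaussian(sxy=3, compat=3)           # spatial smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13,          # appearance-aware kernel
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(n_iters)                         # mean-field iterations
    return np.argmax(np.array(q).reshape(n_labels, height, width), axis=0)
```

Note how both pairwise kernels only encourage label agreement between nearby or similar pixels; neither can express a global constraint such as "the result must look like a lung".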

In this work we propose Post-DAE, a post-processing method based on denoising autoencoders (DAE) trained using only segmentation masks. We learn a low-dimensional space of anatomically plausible segmentations, and use it as a post-processing step to impose shape constraints on the resulting masks obtained with arbitrary segmentation methods. Our approach is independent of image modality and intensity information since it employs only segmentation masks for training. This enables the use of anatomical segmentations that do not need to be paired with intensity images, making the approach very flexible. Our experimental results on anatomical segmentation of X-ray images show that Post-DAE can improve the quality of noisy and incorrect segmentation masks obtained with a variety of standard methods, by bringing them back to a feasible space, with almost no extra computational time.
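
The following is a minimal sketch of the Post-DAE idea, written in PyTorch under stated assumptions: the layer sizes, the pixel-flip corruption scheme, and the names MaskDAE, corrupt, train_post_dae and post_process are illustrative placeholders, not the authors' exact architecture or training setup.

```python
# Illustrative sketch (assumed PyTorch implementation, not the authors' code):
# a convolutional denoising autoencoder trained on binary masks only, later used
# to map an arbitrary method's noisy output back to a plausible anatomical shape.
import torch
import torch.nn as nn

class MaskDAE(nn.Module):
    """Denoising autoencoder over 1-channel masks (H and W divisible by 8)."""
    def __init__(self, latent_channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),               # H/2
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),              # H/4
            nn.Conv2d(32, latent_channels, 3, stride=2, padding=1), nn.ReLU(), # H/8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),                 # mask logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def corrupt(mask, p=0.1):
    """Flip each pixel with probability p to simulate a degraded segmentation."""
    flip = (torch.rand_like(mask) < p).float()
    return mask * (1 - flip) + (1 - mask) * flip

def train_post_dae(mask_loader, epochs=50):
    """Train the DAE from masks alone; mask_loader is an assumed DataLoader
    yielding (B, 1, H, W) binary ground-truth masks (no intensity images)."""
    dae = MaskDAE()
    opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for clean in mask_loader:
            loss = loss_fn(dae(corrupt(clean)), clean)  # reconstruct clean mask
            opt.zero_grad(); loss.backward(); opt.step()
    return dae

def post_process(dae, noisy_mask):
    """Project an arbitrary method's (B, 1, H, W) mask onto the learned shape space."""
    with torch.no_grad():
        return (torch.sigmoid(dae(noisy_mask)) > 0.5).float()
```

Because training sees only masks, the same trained autoencoder can post-process the output of any segmentation method for the same anatomy, independently of image modality or intensity information, at the cost of a single forward pass.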

Keywords

Anatomical segmentation · Autoencoders · Convolutional neural networks · Learning representations · Post-processing

Acknowledgments

EF is a beneficiary of an AXA Research Fund grant. The authors gratefully acknowledge NVIDIA Corporation for the donation of the Titan Xp GPU used for this research, and the support of UNL (CAID-PIC-50420150100098LI) and ANPCyT (PICT 2016-0651).

Supplementary material

Supplementary material 1: 490281_1_En_65_MOESM1_ESM.pdf (PDF, 103 KB)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Agostina J. Larrazabal (1)
  • Cesar Martinez (1)
  • Enzo Ferrante (1)

  1. Research Institute for Signals, Systems and Computational Intelligence, sinc(i), FICH-UNL/CONICET, Santa Fe, Argentina
