
Flexible Conditional Image Generation of Missing Data with Learned Mental Maps

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11905)

Abstract

Real-world settings often do not allow the acquisition of high-resolution volumetric images for accurate morphological assessment and diagnosis. In clinical practice it is common to acquire only sparse data (e.g. individual slices) for initial diagnostic decision making. Physicians then rely on their prior knowledge (or mental maps) of human anatomy to extrapolate the underlying 3D information. Accurate mental maps require years of anatomy training, which in the first instance relies on normative learning, i.e. excluding pathology. In this paper, we leverage Bayesian Deep Learning and environment mapping to generate full volumetric anatomy representations from none to a small, sparse set of slices. We evaluate proof-of-concept implementations based on Generative Query Networks (GQN) and Conditional BRUNO using abdominal CT and brain MRI, as well as a clinical application involving sparse, motion-corrupted MR acquisition for fetal imaging. Our approach can reconstruct 3D volumes from one to four tomographic slices, achieving an SSIM of 0.7+ and a cross-correlation of 0.8+ compared to the 3D ground truth.
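
As a concrete illustration of the evaluation quoted above, the following is a minimal sketch, not the authors' implementation, of how a reconstructed volume might be scored against its 3D ground truth using the two reported metrics, SSIM and normalized cross-correlation. The array names, the toy volume shape, and the use of NumPy and scikit-image are assumptions made for this example.

import numpy as np
from skimage.metrics import structural_similarity


def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    # Zero-mean normalized cross-correlation between two volumes, in [-1, 1].
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0


def evaluate_volume(pred: np.ndarray, gt: np.ndarray) -> dict:
    # Compare a generated 3D volume against the ground-truth volume.
    data_range = float(gt.max() - gt.min())
    ssim = structural_similarity(gt, pred, data_range=data_range)
    ncc = normalized_cross_correlation(pred, gt)
    return {"ssim": ssim, "ncc": ncc}


if __name__ == "__main__":
    # Hypothetical stand-ins for a reconstruction and its ground truth (64^3 volumes).
    rng = np.random.default_rng(0)
    gt = rng.random((64, 64, 64))
    pred = gt + 0.05 * rng.standard_normal(gt.shape)
    print(evaluate_volume(pred, gt))

Under this sketch, values of roughly 0.7+ for SSIM and 0.8+ for cross-correlation would correspond to the figures reported in the abstract; the exact numbers depend on intensity normalization and the SSIM window size.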


References

  1. Cerrolaza, J.J., et al.: 3D fetal skull reconstruction from 2DUS via deep conditional generative networks. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018. LNCS, vol. 11070, pp. 383–391. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00928-1_44

  2. Choy, C.B., Xu, D., Gwak, J.Y., Chen, K., Savarese, S.: 3D-R2N2: a unified approach for single and multi-view 3D object reconstruction. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9912, pp. 628–644. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46484-8_38

  3. Ding, F., Leow, W.K., Wang, S.-C.: Segmentation of 3D CT volume images using a single 2D atlas. In: Liu, Y., Jiang, T., Zhang, C. (eds.) CVBIA 2005. LNCS, vol. 3765, pp. 459–468. Springer, Heidelberg (2005). https://doi.org/10.1007/11569541_46

  4. Dinh, L., et al.: Density estimation using real NVP. CoRR abs/1605.08803 (2016)

  5. Ehlke, M., et al.: Fast generation of virtual X-ray images for reconstruction of 3D anatomy. IEEE Trans. Vis. Comput. Graph. 19(12), 2673–2682 (2013)

  6. Eslami, S.M.A., et al.: Neural scene representation and rendering. Science 360(6394), 1204–1210 (2018). https://science.sciencemag.org/content/360/6394/1204

  7. Goodfellow, I., et al.: Generative adversarial nets. In: NIPS, pp. 2672–2680 (2014)

  8. Groth, O.: ogroth/tf-gqn, June 2019. https://github.com/ogroth/tf-gqn

  9. Kainz, B., et al.: Fast volume reconstruction from motion corrupted stacks of 2D slices. IEEE Trans. Med. Imaging 34(9), 1901–1913 (2015)

  10. Kingma, D.P., et al.: Auto-encoding variational Bayes. CoRR abs/1312.6114 (2013)

  11. Korshunova, I., et al.: Conditional BRUNO: a deep recurrent process for exchangeable labelled data. In: Bayesian Deep Learning NeurIPS Workshop (2018)

  12. Kunter, M., et al.: Unsupervised object segmentation for 2D to 3D conversion. In: Stereoscopic Displays and Applications XX, vol. 7237, p. 72371B (2009)

  13. Papamakarios, G., Pavlakou, T., Murray, I.: Masked autoregressive flow for density estimation. In: NIPS, pp. 2338–2347 (2017)

  14. Rezende, D.J., et al.: Variational inference with normalizing flows. In: ICML. JMLR Workshop and Conference Proceedings, vol. 37, pp. 1530–1538. JMLR.org (2015)

  15. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  16. Sohn, K., et al.: Learning structured output representation using deep conditional generative models. In: NeurIPS, pp. 3483–3491 (2015)

  17. Wang, L., et al.: Unsupervised 3D reconstruction from a single image via adversarial learning. CoRR abs/1711.09312 (2017)

  18. Zaheer, M., et al.: Deep sets. In: NeurIPS, pp. 3394–3404 (2017)

Acknowledgements

We thank The Wellcome Trust IEH Award iFind project [102431], Innovate UK: London Medical Imaging & Artificial Intelligence Centre for Value-Based Healthcare [104691], and NVIDIA for their GPU donations. The data used in the preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (http://adni.loni.usc.edu). Fetal brain data were accessed only with informed consent, subject to approval and a formal Data Sharing Agreement. We would also like to thank Ira Korshunova, author of BRUNO and Conditional BRUNO, for the valuable discussions.

Author information

Corresponding author

Correspondence to Benjamin Hou.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Hou, B., Vlontzos, A., Alansary, A., Rueckert, D., Kainz, B. (2019). Flexible Conditional Image Generation of Missing Data with Learned Mental Maps. In: Knoll, F., Maier, A., Rueckert, D., Ye, J. (eds) Machine Learning for Medical Image Reconstruction. MLMIR 2019. Lecture Notes in Computer Science, vol 11905. Springer, Cham. https://doi.org/10.1007/978-3-030-33843-5_13

  • DOI: https://doi.org/10.1007/978-3-030-33843-5_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33842-8

  • Online ISBN: 978-3-030-33843-5

  • eBook Packages: Computer Science, Computer Science (R0)
