
Next generation research applications for hybrid PET/MR and PET/CT imaging using deep learning

  • Review Article
  • Published:
European Journal of Nuclear Medicine and Molecular Imaging

Abstract

Introduction

Recently, there have been significant advances in machine learning and artificial intelligence (AI) centered on imaging-based applications such as computer vision. In particular, the tremendous power of deep learning algorithms, primarily based on convolutional neural network strategies, is becoming increasingly apparent and has already had a direct impact on the fields of radiology and nuclear medicine. While most early applications of computer vision to radiological imaging focused on classifying images into disease categories, these methods can also be used to improve image quality. Hybrid imaging approaches, such as PET/MRI and PET/CT, are ideal for applying these methods.
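To make the image-quality use of convolutional networks concrete: the core building block is a convolution that maps a noisy input toward a cleaner target, often with a residual connection. The following is a minimal NumPy sketch of that idea; the hand-set averaging kernel stands in for weights a network would actually learn from paired training data, and all array names and sizes are illustrative, not taken from the article.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution (no padding): the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A synthetic noisy "PET" slice: one bright lesion plus Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[12:20, 12:20] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# A 3x3 averaging kernel stands in for learned weights; a trained network
# would predict this cleaner estimate directly from the noisy input.
kernel = np.full((3, 3), 1.0 / 9.0)
denoised = conv2d(noisy, kernel)

# The denoised estimate is closer to the clean target than the noisy input.
err_noisy = np.mean((noisy[1:-1, 1:-1] - clean[1:-1, 1:-1]) ** 2)
err_denoised = np.mean((denoised - clean[1:-1, 1:-1]) ** 2)
```

A real denoising network stacks many such convolutions with learned, nonlinearity-separated kernels, but the principle, trading raw noise for a data-driven estimate of the underlying signal, is the same.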

Methods

This review gives an overview of how AI can be applied to improve image quality for PET imaging directly, and how the additional use of anatomic information from CT and MRI can lead to further benefits. For PET, these performance gains can be used to shorten scan times, improving patient comfort and reducing motion artifacts, or to push towards lower radiotracer doses. They also open possibilities for dual-tracer studies, more frequent follow-up examinations, and new imaging indications. How to assess image quality, and the potential effects of bias in training and testing sets, will also be discussed.
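The "additional anatomic information" from hybrid imaging is typically supplied by stacking co-registered MRI contrasts with the low-dose PET as input channels to the network. A hedged sketch of that data-preparation step follows; the variable names, slice sizes, and choice of contrasts are illustrative assumptions, not specifics from the article.

```python
import numpy as np

# Co-registered slices, all resampled to a common grid (sizes illustrative).
rng = np.random.default_rng(1)
low_dose_pet = rng.random((64, 64))  # low-count PET slice
mri_t1 = rng.random((64, 64))        # T1-weighted MRI slice
mri_flair = rng.random((64, 64))     # FLAIR MRI slice

def normalize(x):
    """Per-channel intensity normalization so modalities share a common scale."""
    return (x - x.mean()) / (x.std() + 1e-8)

# Stack modalities along a channel axis, giving shape (channels, H, W), the
# usual CNN input layout; the network then learns to predict full-dose PET
# from this multi-channel input.
net_input = np.stack([normalize(low_dose_pet),
                      normalize(mri_t1),
                      normalize(mri_flair)])
print(net_input.shape)  # (3, 64, 64)
```

Accurate spatial registration between modalities matters here: channel stacking assumes each pixel indexes the same anatomical location in every input.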

Conclusion

Harnessing the power of these new technologies to extract maximal information from hybrid PET imaging will open up new vistas for both research and clinical applications with associated benefits in patient care.

(Figures 1–4 appear in the full article.)



Acknowledgements

Grant Support: NIH R01-EB025220.

Author information

Correspondence to Greg Zaharchuk.

Ethics declarations

Conflict of interest

Author GZ has received research support from GE Healthcare, Bayer Healthcare, and Nvidia. Author GZ is a co-founder of and holds an equity position in Subtle Medical, Inc.

Ethical approval

This article is a review and does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Advanced Image Analyses (Radiomics and Artificial Intelligence).


About this article


Cite this article

Zaharchuk, G. Next generation research applications for hybrid PET/MR and PET/CT imaging using deep learning. Eur J Nucl Med Mol Imaging 46, 2700–2707 (2019). https://doi.org/10.1007/s00259-019-04374-9

