Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs)

  • Conference paper
Part of the book series: Lecture Notes in Computer Science (LNIP, volume 10555)

Abstract

Positron emission tomography (PET) imaging is widely used for staging and for monitoring treatment in a variety of cancers, including the lymphomas and lung cancer. Recently, there has been a marked increase in the accuracy and robustness of machine learning methods and in their application to computer-aided diagnosis (CAD) systems, e.g., the automated detection and quantification of abnormalities in medical images. Successful machine learning methods require large amounts of training data; hence, the synthesis of PET images could play an important role in enlarging training sets and ultimately improving the accuracy of PET-based CAD systems. Existing approaches, such as atlas-based methods or those based on simulated or physical phantoms, have difficulty reproducing the low resolution and low signal-to-noise ratio inherent in PET images. In addition, these methods usually have limited capacity to produce synthetic PET images with large anatomical and functional variation. We therefore propose a new method to synthesize PET data via multi-channel generative adversarial networks (M-GAN) that addresses these limitations. In contrast to existing medical image synthesis methods that rely on low-level features, our M-GAN captures feature representations with a high level of semantic information through adversarial learning. Within a single framework, the M-GAN takes the annotation (label map) as input to synthesize regions of high uptake, e.g., tumors, together with the computed tomography (CT) image to constrain appearance consistency with the CT-derived anatomical information, and outputs the synthetic PET image directly. Experiments on 50 lung cancer PET-CT studies show that our method produces more realistic PET images than conventional GAN methods.
Further, a PET tumor detection model trained with our synthetic PET data performed competitively with the same model trained on real PET data (2.79% lower recall). We suggest that training with a combination of real and synthetic images can enlarge the training data available for machine learning methods.
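The multi-channel conditioning described above can be illustrated with a minimal sketch: a tumor label map and a CT slice are stacked as input channels to a generator, whose output is scored by a discriminator under the standard non-saturating GAN generator loss, -log D(G(x)). Everything below is a toy assumption for illustration (the 8x8 arrays, the linear "generator", the logistic "discriminator", and the weights `w` and `v`), not the paper's actual M-GAN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one 2-D slice: a binary tumor label map (annotated
# high-uptake region) and a CT slice (anatomical channel).
H = W = 8
label = np.zeros((H, W))
label[3:5, 3:5] = 1.0
ct = rng.normal(size=(H, W))

# Multi-channel conditioning: stack the label map and the CT slice as the
# two input channels of the generator.
x = np.stack([label, ct])  # shape (2, H, W)

def generator(x, w):
    """Toy linear 'generator': a weighted sum of the two input channels."""
    return w[0] * x[0] + w[1] * x[1]

def discriminator(pet, v):
    """Toy 'discriminator': logistic score on the mean image intensity."""
    return 1.0 / (1.0 + np.exp(-v * pet.mean()))

w = np.array([0.5, 0.5])          # assumed generator weights
fake_pet = generator(x, w)        # synthetic PET slice, shape (H, W)
d_fake = discriminator(fake_pet, v=1.0)

# Non-saturating adversarial loss for the generator: -log D(G(x)).
g_loss = -np.log(d_fake + 1e-12)
print(x.shape, fake_pet.shape, float(g_loss))
```

In the real M-GAN, the generator and discriminator are deep convolutional networks and the label and CT channels jointly constrain where uptake appears and how it aligns with anatomy; the sketch only shows the input-stacking and loss bookkeeping.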



Author information

Correspondence to Lei Bi.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Bi, L., Kim, J., Kumar, A., Feng, D., Fulham, M. (2017). Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs). In: Cardoso, M., et al. (eds.) Molecular Imaging, Reconstruction and Analysis of Moving Body Organs, and Stroke Imaging and Treatment. RAMBO/CMMI/SWITCH 2017. Lecture Notes in Computer Science, vol. 10555. Springer, Cham. https://doi.org/10.1007/978-3-319-67564-0_5

  • DOI: https://doi.org/10.1007/978-3-319-67564-0_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67563-3

  • Online ISBN: 978-3-319-67564-0

  • eBook Packages: Computer Science (R0)
