
Synthesis of CT images from digital body phantoms using CycleGAN

  • Tom Russ
  • Stephan Goerttler
  • Alena-Kathrin Schnurr
  • Dominik F. Bauer
  • Sepideh Hatamikia
  • Lothar R. Schad
  • Frank G. Zöllner
  • Khanlian Chung
Original Article

Abstract

Purpose

The potential of medical image analysis with neural networks is limited by the restricted availability of extensive data sets. The incorporation of synthetic training data is one approach to bypass this shortcoming, as synthetic data offer accurate annotations and unlimited data size.

Methods

We evaluated eleven CycleGANs for the synthesis of computed tomography (CT) images based on XCAT body phantoms. Image quality was assessed in terms of anatomical accuracy and realistic noise properties. We performed two studies: one exploring various network and training configurations, the other a task-based adaptation of the corresponding loss function.
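
The CycleGAN objective combines an adversarial term for each translation direction with a cycle-consistency term, which is what allows unpaired phantom and patient images to be used for training. The following is a minimal PyTorch sketch of this standard objective (after Zhu et al.); the generator and discriminator networks, the least-squares adversarial loss, and the weight \(\lambda = 10\) are common defaults assumed here, not the authors' exact configuration.

    # Minimal sketch of the CycleGAN generator objective for
    # phantom -> CT translation. G_xy / G_yx are the two generators,
    # D_x / D_y the two discriminators; architectures and the lambda
    # weight are illustrative assumptions.
    import torch
    import torch.nn as nn

    adv_loss = nn.MSELoss()  # least-squares GAN formulation
    cyc_loss = nn.L1Loss()   # cycle-consistency term
    lambda_cyc = 10.0        # common default weight, an assumption here

    def generator_loss(G_xy, G_yx, D_x, D_y, x_phantom, y_ct):
        """Adversarial + cycle-consistency loss for one generator update."""
        fake_ct = G_xy(x_phantom)   # XCAT slice(s) -> synthetic CT
        fake_phantom = G_yx(y_ct)   # real CT -> synthetic phantom

        # Both generators try to make their discriminator output "real" (1).
        pred_ct, pred_ph = D_y(fake_ct), D_x(fake_phantom)
        loss_adv = adv_loss(pred_ct, torch.ones_like(pred_ct)) \
                 + adv_loss(pred_ph, torch.ones_like(pred_ph))

        # Translating forth and back should reproduce the original input.
        loss_cyc = cyc_loss(G_yx(fake_ct), x_phantom) \
                 + cyc_loss(G_xy(fake_phantom), y_ct)

        return loss_adv + lambda_cyc * loss_cyc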

Results

The CycleGAN with the ResNet architecture and three XCAT input slices achieved the best overall performance in the configuration study. In the task-based study, the anatomical accuracy of the generated synthetic CTs remained high (\(\mathrm{SSIM} = 0.64\) and \(\mathrm{FSIM} = 0.76\)), while the generated noise texture was close to real data, with a noise power spectrum correlation coefficient of \(\mathrm{NCC} = 0.92\). In addition, the dedicated loss function improved annotation accuracy by 65%. The feasibility of combined training on both real and synthetic data was demonstrated in a blood vessel segmentation task (Dice similarity coefficient \(\mathrm{DSC} = 0.83 \pm 0.05\)).
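
For reference, a hedged sketch of how such figures of merit can be computed: scikit-image provides SSIM, while the Dice coefficient and the correlation of noise power spectra are written out below under simple assumptions (2D slices, noise-only ROIs of equal size). FSIM has no equally standard library implementation and is omitted.

    # Sketch of the reported figures of merit under simple assumptions.
    import numpy as np
    from skimage.metrics import structural_similarity  # SSIM

    def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice similarity coefficient (DSC) between two binary masks."""
        intersection = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

    def noise_power_spectrum(noise_roi: np.ndarray) -> np.ndarray:
        """2D noise power spectrum of a mean-subtracted, noise-only ROI."""
        roi = noise_roi - noise_roi.mean()
        return np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2

    def nps_correlation(roi_synth: np.ndarray, roi_real: np.ndarray) -> float:
        """Correlation coefficient (NCC) between two noise power spectra."""
        nps_a = noise_power_spectrum(roi_synth).ravel()
        nps_b = noise_power_spectrum(roi_real).ravel()
        return np.corrcoef(nps_a, nps_b)[0, 1]

    # SSIM between a synthetic CT slice and its reference, e.g.:
    # ssim = structural_similarity(ct_synth, ct_real, data_range=np.ptp(ct_real))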

Conclusion

CT synthesis using CycleGAN is a feasible approach to generate realistic images from simulated XCAT phantoms. Synthetic CTs generated with a task-based loss function can be used in addition to real data to improve the performance of segmentation networks.
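
A minimal sketch of the combined-training idea in PyTorch: real and synthetic (image, mask) pairs are concatenated into one dataset so that mixed batches reach the segmentation network. The dummy tensors and shapes below are placeholders standing in for the authors' CT slices and vessel masks.

    # Minimal sketch: mixed batches of real and synthetic training pairs.
    # Dummy tensors stand in for CT slices and vessel masks (assumed shapes).
    import torch
    from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

    real = TensorDataset(torch.randn(100, 1, 256, 256),       # real CT slices
                         torch.zeros(100, 1, 256, 256))       # vessel masks
    synthetic = TensorDataset(torch.randn(400, 1, 256, 256),  # CycleGAN output
                              torch.zeros(400, 1, 256, 256))  # XCAT labels

    # Shuffling across the concatenated dataset yields mixed batches.
    loader = DataLoader(ConcatDataset([real, synthetic]),
                        batch_size=8, shuffle=True)
    for images, masks in loader:
        pass  # forward/backward pass of the segmentation network goes here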

Keywords

CT synthesis · Generative adversarial networks · CycleGAN · Simulation-based deep learning · Physical modeling

Notes

Acknowledgements

We are thankful to Joshua Gawlitza and Leonard Chandra for their support regarding the CT data and the vessel segmentations.

Funding

This research project is part of the Research Campus M\(^2\)OLIE and funded by the German Federal Ministry of Education and Research (BMBF) within the framework 'Forschungscampus – Public–Private Partnership for Innovation' under the funding code 13GW0388A. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the NVIDIA Titan Xp GPU used for this research.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Informed consent

Informed consent was obtained from all individual participants included in the study.


Copyright information

© CARS 2019

Authors and Affiliations

  1. Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
  2. Austrian Center for Medical Innovation and Technology, Vienna, Austria
  3. Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
