
Enabling machine learning in X-ray-based procedures via realistic simulation of image formation

  • Mathias Unberath (corresponding author)
  • Jan-Nico Zaech
  • Cong Gao
  • Bastian Bier
  • Florian Goldmann
  • Sing Chun Lee
  • Javad Fotouhi
  • Russell Taylor
  • Mehran Armand
  • Nassir Navab
Original Article

Abstract

Purpose

Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images acquired for procedural guidance are not archived and thus unavailable for learning; even if they were available, annotating them at the required scale would be a severe challenge. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs, since labels are comparatively easy to obtain and potentially readily available.

Methods

We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCUDA. DeepDRR relies on machine learning for material decomposition in 3D and scatter estimation in 2D, but uses analytic forward projection and noise injection to keep computation times acceptable. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve DRRs and on DeepDRRs, and compare their performance on images of cadaveric specimens acquired with a clinical C-arm X-ray system.
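
The image-formation model described above can be summarized in a few lines. The following is a minimal, illustrative sketch only: the array shapes, spectrum bins, and mass attenuation coefficients are rough placeholders, and the structure does not correspond to the actual DeepDRR API, which is available open source.

```python
# Minimal sketch of a DeepDRR-style forward model: per-material density line
# integrals -> polychromatic attenuation -> scatter estimate -> Poisson noise.
# All shapes, spectra, and coefficients are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
H, W = 128, 128

# (1) Material decomposition: DeepDRR learns this from CT in 3D; here we
#     simply assume per-material density line integrals (g/cm^2) on the detector.
density_line_integrals = {
    "soft_tissue": rng.uniform(5.0, 15.0, (H, W)),
    "bone":        rng.uniform(0.0, 4.0,  (H, W)),
}

# (2) Analytic polychromatic attenuation over a coarse spectrum:
#     (energy in keV, relative photon weight); mass attenuation in cm^2/g.
spectrum = [(60.0, 0.5), (80.0, 0.3), (100.0, 0.2)]
mu_over_rho = {
    "soft_tissue": {60.0: 0.205, 80.0: 0.182, 100.0: 0.169},
    "bone":        {60.0: 0.315, 80.0: 0.223, 100.0: 0.186},
}

primary = np.zeros((H, W))
for energy, weight in spectrum:
    line_integral = sum(mu_over_rho[m][energy] * density_line_integrals[m]
                        for m in density_line_integrals)
    primary += weight * energy * np.exp(-line_integral)  # energy reaching each pixel

# (3) Scatter: DeepDRR predicts this with a ConvNet in 2D; a constant fraction
#     of the mean primary signal stands in for that learned estimate here.
scatter = 0.1 * primary.mean() * np.ones_like(primary)

# (4) Noise injection: Poisson statistics on the (normalized) detected signal.
photons_per_pixel = 1e4
signal = (primary + scatter) / (primary + scatter).max()
simulated_radiograph = rng.poisson(signal * photons_per_pixel) / photons_per_pixel
```

The sketch also illustrates the design trade-off: only the scatter term would otherwise require a Monte Carlo simulation, so replacing it with a learned 2D estimate is what keeps the pipeline fast enough to generate large training sets.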

Results

Our findings are consistent across both considered tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs (\(p<0.01\)).
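
The abstract does not name the statistical test behind the \(p<0.01\) result. Purely as an illustration of how such a paired comparison can be run, the sketch below applies a Wilcoxon signed-rank test to synthetic per-image errors; the error values and the choice of test are assumptions, not taken from the paper.

```python
# Illustration only: paired comparison of per-image errors for ConvNets trained
# on naive DRRs vs. DeepDRRs, evaluated on the same real radiographs.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
errors_naive_drr = rng.normal(loc=12.0, scale=3.0, size=30)  # e.g., landmark error in mm
errors_deep_drr = rng.normal(loc=7.0, scale=2.5, size=30)    # same images, DeepDRR-trained model

statistic, p_value = wilcoxon(errors_naive_drr, errors_deep_drr)
print(f"Wilcoxon signed-rank test: W = {statistic:.1f}, p = {p_value:.4f}")
```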

Conclusion

Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT helps promote the adoption of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis and thereby simplify surgical workflows.

Keywords

Monte Carlo simulation · Artificial intelligence · Computer assisted surgery · Robotic surgery · Segmentation · Image guidance

Notes

Acknowledgements

We gratefully acknowledge support from grants R21 EB020113, R01 EB016703, and R01 EB0223939, and from the NVIDIA Corporation through the donation of the GPUs used for this research.

Compliance with ethical standards

Disclaimer

The concepts and information presented in this paper are based on research and are not commercially available.

Conflict of interest

The authors have no conflict of interest to declare.

Informed consent

This article does not contain patient data.

Copyright information

© CARS 2019

Authors and Affiliations

  • Mathias Unberath (1, 2, 3; corresponding author)
  • Jan-Nico Zaech (2, 3)
  • Cong Gao (1, 2)
  • Bastian Bier (2, 3)
  • Florian Goldmann (2, 3)
  • Sing Chun Lee (1, 2, 3)
  • Javad Fotouhi (1, 2, 3)
  • Russell Taylor (1, 2)
  • Mehran Armand (2, 4)
  • Nassir Navab (1, 2, 3)

  1. Department of Computer Science, Johns Hopkins University, Baltimore, USA
  2. Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, USA
  3. Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
  4. Johns Hopkins University Applied Physics Laboratory, Laurel, USA
