
Catheter Synthesis in X-Ray Fluoroscopy with Generative Adversarial Networks

  • Ihsan Ullah
  • Philip Chikontwe
  • Sang Hyun Park (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11843)

Abstract

Accurate localization of catheters or guidewires in fluoroscopy images is important for improving the stability of intervention procedures as well as for the development of surgical navigation systems. Recently, deep learning methods have been proposed to improve performance; however, these techniques require extensive pixel-wise annotations, and the human effort needed to produce them is expensive. In this study, we reduce this labeling effort using generative adversarial networks (CycleGAN), synthesizing realistic catheters in fluoroscopy from guidewires localized in camera images, whose annotations are cheaper to acquire. Our approach is motivated by the fact that catheters are tubular structures with varying profiles: given a guidewire in a camera image, we can obtain a centerline that follows the profile of a catheter in an X-ray image and create plausible X-ray images composited with such a centerline. To generate images similar to actual X-ray images, we propose a loss term that includes a perceptual loss alongside the standard cycle loss. Experimental results show that the proposed method outperforms a conventional GAN and generates images of consistent quality. Further, we provide evidence supporting the development of methods that leverage such synthetic composite images in supervised settings.
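As a rough illustration of the loss described in the abstract, the sketch below (not the authors' implementation) combines a CycleGAN cycle-consistency term with a VGG16 feature-based perceptual term. The generator names G_ab/G_ba, the VGG layer cut-off, and the weights lambda_cyc and lambda_perc are assumptions chosen for illustration only.

# Minimal sketch (assumed, not the authors' code): cycle-consistency loss
# augmented with a VGG16-based perceptual loss, as described in the abstract.
import torch
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    """L1 distance between VGG16 feature maps of two images."""
    def __init__(self, layer_index=16):
        super().__init__()
        # Pretrained VGG16 used as a fixed feature extractor.
        vgg = models.vgg16(pretrained=True).features[:layer_index]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.criterion = nn.L1Loss()

    def forward(self, x, y):
        # Replicate single-channel X-ray images to 3 channels for VGG input.
        if x.shape[1] == 1:
            x, y = x.repeat(1, 3, 1, 1), y.repeat(1, 3, 1, 1)
        return self.criterion(self.vgg(x), self.vgg(y))

def total_cycle_loss(real_a, real_b, G_ab, G_ba, perceptual,
                     lambda_cyc=10.0, lambda_perc=1.0):
    """Cycle loss plus perceptual loss on the reconstructed images.

    G_ab, G_ba are the two CycleGAN generators (A->B and B->A);
    the weights lambda_cyc and lambda_perc are illustrative defaults.
    """
    rec_a = G_ba(G_ab(real_a))   # A -> B -> A reconstruction
    rec_b = G_ab(G_ba(real_b))   # B -> A -> B reconstruction
    l1 = nn.L1Loss()
    cycle = l1(rec_a, real_a) + l1(rec_b, real_b)
    perc = perceptual(rec_a, real_a) + perceptual(rec_b, real_b)
    return lambda_cyc * cycle + lambda_perc * perc

In practice this term would be added to the usual adversarial losses of both generators; the adversarial part is omitted here for brevity.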

Keywords

Adversarial learning · Catheter robot · Convolutional neural networks · Image translation · Image synthesis

Notes

Acknowledgment

This work was supported by the Robot Industry Fusion Core Technology Development Project through the Korea Evaluation Institute of Industrial Technology (KEIT), funded by the Ministry of Trade, Industry and Energy of Korea (MOTIE) (No. 10052980), and by the DGIST R&D Program of the Ministry of Science and ICT (19-RT-01).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ihsan Ullah (1)
  • Philip Chikontwe (1)
  • Sang Hyun Park (1, corresponding author)

  1. Department of Robotics Engineering, DGIST, Daegu, Republic of Korea
