
A Two-Stage Framework for Real-Time Guidewire Endpoint Localization

  • Rui-Qi Li
  • Guibin Bian
  • Xiaohu Zhou
  • Xiaoliang Xie
  • ZhenLiang Ni
  • Zengguang Hou (email author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Real-time instrument tracking is a stepping stone to various computer-assisted interventions. In this paper, we introduce a two-stage framework for real-time guidewire endpoint localization in fluoroscopy images during percutaneous coronary intervention. In the first stage, a YOLOv3 detector predicts all bounding boxes that contain a guidewire, and a post-processing algorithm then refines the boxes produced by the detector. In the second stage, an SA-hourglass network, modified from the stacked hourglass network, predicts a dense heatmap of the guidewire endpoints that each bounding box may contain. Although the SA-hourglass network is designed for guidewire endpoint localization, we believe it generalizes to keypoint localization for other surgical instruments. To support this claim, we train the SA-hourglass network not only on a guidewire dataset but also on a retinal microsurgery dataset, and achieve state-of-the-art localization results on both.
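The paper page itself contains no code. As a rough, minimal sketch of how a detect-then-localize pipeline of this kind is typically wired together, the Python below crops each detected bounding box, predicts per-endpoint heatmaps on the crop, and decodes endpoint coordinates by mapping each heatmap argmax back into the original image frame. The helper names (decode_endpoints, localize_endpoints) and the toy stand-in models are hypothetical illustrations, not the authors' implementation, and the paper's box-refinement post-processing step is omitted.

    import numpy as np

    def decode_endpoints(heatmaps, box):
        """Map per-endpoint heatmap maxima back into image coordinates.

        heatmaps: (K, H, W) array, one dense heatmap per endpoint.
        box: (x0, y0, x1, y1) bounding box of the crop in the original image.
        """
        x0, y0, x1, y1 = box
        _, h, w = heatmaps.shape
        points = []
        for hm in heatmaps:
            iy, ix = np.unravel_index(np.argmax(hm), hm.shape)
            # Rescale the heatmap grid to the box size, then offset into the image.
            points.append((x0 + ix * (x1 - x0) / w,
                           y0 + iy * (y1 - y0) / h))
        return points

    def localize_endpoints(image, detect_boxes, predict_heatmaps):
        """Two-stage pipeline as described in the abstract: detect guidewire
        boxes, then predict endpoint heatmaps inside each box. detect_boxes
        and predict_heatmaps stand in for the trained YOLOv3 and SA-hourglass
        models; the box-refinement post-processing is left out."""
        endpoints = []
        for box in detect_boxes(image):
            x0, y0, x1, y1 = (int(round(v)) for v in box)
            crop = image[y0:y1, x0:x1]
            endpoints.extend(decode_endpoints(predict_heatmaps(crop), box))
        return endpoints

    if __name__ == "__main__":
        # Toy stand-ins for the two trained networks.
        def fake_detector(img):
            return [(100, 80, 164, 144)]          # one 64x64 guidewire box

        def fake_hourglass(crop):
            hm = np.zeros((2, 64, 64))            # two endpoints: tip and tail
            hm[0, 10, 20] = 1.0
            hm[1, 50, 40] = 1.0
            return hm

        print(localize_endpoints(np.zeros((512, 512)),
                                 fake_detector, fake_hourglass))

Taking the argmax of a dense heatmap is the standard decoding step in heatmap-based keypoint localization (as in stacked hourglass networks [11]); sub-pixel refinement around the peak is a common extension.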

Keywords

Guidewire · Keypoint localization · Surgical instrument

Notes

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grants 61533016, U1713220, U1613210), by the National Key Research and Development Program of China under Grant 2017YFB1302704, and by the Strategic Priority Research Program of CAS under Grant XDBS01040100.

Supplementary material

490279_1_En_40_MOESM1_ESM.mp4 (14.5 MB)
Supplementary material 1 (MP4, 14,871 KB)

References

  1. Mazomenos, E.B., et al.: A survey on the current status and future challenges towards objective skills assessment in endovascular surgery. J. Med. Robot. Res. 01(03), 1640010 (2016)
  2. Ambrosini, P., Ruijters, D., Niessen, W.J., Moelker, A., van Walsum, T.: Fully automatic and real-time catheter segmentation in X-ray fluoroscopy. In: Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D.L., Duchesne, S. (eds.) MICCAI 2017. LNCS, vol. 10434, pp. 577–585. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66185-8_65
  3. Vandini, A., Glocker, B., Hamady, M., Yang, G.Z.: Robust guidewire tracking under large deformations combining segment-like features (SEGlets). Med. Image Anal. 38, 150–164 (2017)
  4. Heibel, H., Glocker, B., Groher, M., Pfister, M., Navab, N.: Interventional tool tracking using discrete optimization. IEEE Trans. Med. Imaging 32(3), 544–555 (2013)
  5. Kurmann, T., et al.: Simultaneous recognition and pose estimation of instruments in minimally invasive surgery. In: Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D.L., Duchesne, S. (eds.) MICCAI 2017. LNCS, vol. 10434, pp. 505–513. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66185-8_57
  6. Sznitman, R., Ali, K., Richa, R., Taylor, R.H., Hager, G.D., Fua, P.: Data-driven visual tracking in retinal microsurgery. In: Ayache, N., Delingette, H., Golland, P., Mori, K. (eds.) MICCAI 2012. LNCS, vol. 7511, pp. 568–575. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33418-4_70
  7. Laina, I., et al.: Concurrent segmentation and localization for tracking of surgical instruments. In: Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D.L., Duchesne, S. (eds.) MICCAI 2017. LNCS, vol. 10434, pp. 664–672. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66185-8_75
  8. Papandreou, G., et al.: Towards accurate multi-person pose estimation in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4903–4911 (2017)
  9. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767 (2018)
  10. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2016)
  11. Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9912, pp. 483–499. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46484-8_29
  12. Chu, X., Yang, W., Ouyang, W., Ma, C., Yuille, A.L., Wang, X.: Multi-context attention for human pose estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5669–5678 (2017)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Rui-Qi Li 1, 2
  • Guibin Bian 1, 2
  • Xiaohu Zhou 1, 2
  • Xiaoliang Xie 1, 2
  • ZhenLiang Ni 1, 2
  • Zengguang Hou 1, 2, 3 (email author)
  1. State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  2. University of Chinese Academy of Sciences, Beijing, China
  3. CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
