
Real-Time 3D Reconstruction of Colonoscopic Surfaces for Determining Missing Regions

  • Ruibin Ma
  • Rui Wang
  • Stephen Pizer
  • Julian Rosenman
  • Sarah K. McGill
  • Jan-Michael Frahm
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Colonoscopy is the most widely used medical technique for screening the human large intestine (colon) for cancer precursors. However, parts of the colon surface frequently go unvisualized, and it is difficult for the endoscopist to recognize this from the video. Non-visualization results from failure to orient the endoscope toward the full circumference of parts of the colon, from occlusion by colon structures, and from intervening material inside the colon. Our solution is real-time dense 3D reconstruction of colon chunks (short lengths of colon) together with a display of the missing regions. We accomplish this with a novel deep-learning-driven dense SLAM (simultaneous localization and mapping) system that produces a camera trajectory and a dense reconstructed surface for each chunk. Traditional SLAM systems work poorly on low-textured colonoscopy frames and suffer severe scale and camera drift. In our method a recurrent neural network (RNN) predicts scale-consistent depth maps and camera poses for successive frames. These outputs are incorporated into a standard SLAM pipeline with local windowed optimization, and the depth maps are finally fused into a global surface using the optimized camera poses. To the best of our knowledge, we are the first to reconstruct a dense colon surface from video in real time and to display the missing surface.
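
The per-frame pipeline described in the abstract (an RNN predicts a depth map and relative camera pose for each frame, a local windowed optimization refines recent poses, and the depth maps are fused into a global surface using the refined poses) can be sketched as below. This is a minimal, hypothetical Python sketch, not the authors' implementation: the class and function names (RNNDepthPoseNet, refine_window, fuse_depth_map, reconstruct_chunk) are illustrative assumptions, and the network, optimization, and fusion steps are stubbed out.

```python
# Hypothetical sketch of the per-frame reconstruction loop described in the
# abstract. All names below are illustrative placeholders, not the paper's API.

import numpy as np


class RNNDepthPoseNet:
    """Stand-in for the recurrent depth/pose network (no real weights)."""

    def __init__(self):
        self.hidden = None  # recurrent state carried across frames

    def predict(self, frame):
        # Placeholder: a real network would return a scale-consistent depth
        # map and the relative pose from the previous frame to this one.
        depth = np.ones(frame.shape[:2], dtype=np.float32)
        rel_pose = np.eye(4, dtype=np.float32)
        return depth, rel_pose


def refine_window(poses, window=5):
    """Stand-in for local windowed optimization over the most recent poses."""
    # A real system would minimize photometric/geometric error over the window.
    return poses


def fuse_depth_map(surface, depth, pose):
    """Stand-in for fusing one depth map into the global surface model."""
    surface.append((pose, depth))
    return surface


def reconstruct_chunk(frames):
    """Run the sketched pipeline over one colon chunk (a short frame sequence)."""
    net = RNNDepthPoseNet()
    poses = [np.eye(4, dtype=np.float32)]  # world pose of the first frame
    surface = []
    for frame in frames:
        depth, rel_pose = net.predict(frame)   # RNN depth + relative pose
        poses.append(poses[-1] @ rel_pose)     # chain into a camera trajectory
        poses = refine_window(poses)           # local windowed optimization
        surface = fuse_depth_map(surface, depth, poses[-1])  # fuse with refined pose
    return poses, surface


if __name__ == "__main__":
    dummy_frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(10)]
    trajectory, surface = reconstruct_chunk(dummy_frames)
    print(len(trajectory), "poses,", len(surface), "fused depth maps")
```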

Keywords

Colonoscopy · SLAM · Reconstruction · RNN

Supplementary material

490279_1_En_64_MOESM1_ESM.zip (11.9 MB)
Supplementary material 1 (zip 12204 KB)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ruibin Ma (1)
  • Rui Wang (1)
  • Stephen Pizer (1)
  • Julian Rosenman (1)
  • Sarah K. McGill (1)
  • Jan-Michael Frahm (1)
  1. University of North Carolina at Chapel Hill, Chapel Hill, USA
