Interactive Endoscopy: A Next-Generation, Streamlined User Interface for Lung Surgery Navigation

  • Paul Thienphrapa
  • Torre Bydlon
  • Alvin Chen
  • Prasad Vagdargi
  • Nicole Varble
  • Douglas Stanton
  • Aleksandra Popovic
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Computer-generated graphics are superimposed onto live video from an endoscope, offering the surgeon visual information that is hidden in the native scene: this is the classical scenario of augmented reality in minimally invasive surgery. Over the past few decades, research efforts have made considerable progress against the challenges of infusing a priori knowledge into endoscopic streams. As framed, these contributions emulate perception at the level of the expert surgeon, perpetuating debates on the technical, clinical, and societal viability of the proposition.

We herein introduce interactive endoscopy, which transforms passive visualization into an interface that lets the surgeon label noteworthy anatomical features found in the endoscopic video and have the virtual annotations remember their tissue locations during surgical manipulation. The streamlined interface combines vision-based tool tracking and speech recognition to enable interactive selection and labeling, followed by tissue tracking and optical flow for label persistence. These discrete capabilities have matured rapidly in recent years, supporting the technical viability of the system; the system supports clinical viability by helping clinicians offload the cognitive demands of visually deciphering soft tissues; and it supports societal viability by engaging, rather than emulating, surgeon expertise. Through a video-assisted thoracotomy use case, we develop a proof of concept that improves workflow by tracking surgical tools and visualizing tissue, while serving as a bridge to the classical promise of augmented reality in surgery.
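
The label-persistence step lends itself to a brief illustration. The minimal sketch below (Python with OpenCV) propagates surgeon-placed labels across frames using pyramidal Lucas-Kanade optical flow. It is not the authors' implementation: the video source, label names, coordinates, and tracker parameters are illustrative assumptions, and the speech-recognition and tool-tracking front end used to place labels is omitted.

    # Minimal sketch: keeping surgeon-placed labels anchored to tissue with
    # sparse Lucas-Kanade optical flow (OpenCV). Illustrative assumptions only.
    import cv2
    import numpy as np

    LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    def track_labels(cap, labels):
        """Propagate labels, given as (name, (x, y)) pairs on the first frame."""
        ok, frame = cap.read()
        if not ok:
            return
        prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = np.array([p for _, p in labels], np.float32).reshape(-1, 1, 2)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Estimate per-point motion between consecutive frames.
            pts_new, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts,
                                                          None, **LK_PARAMS)
            for (name, _), p, found in zip(labels, pts_new, status):
                if found:  # redraw the annotation at its tracked tissue location
                    x, y = map(int, p.ravel())
                    cv2.circle(frame, (x, y), 6, (0, 255, 0), 2)
                    cv2.putText(frame, name, (x + 8, y - 8),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
            cv2.imshow("interactive endoscopy", frame)
            if cv2.waitKey(1) & 0xFF == 27:  # Esc exits
                break
            prev_gray, pts = gray, pts_new

    track_labels(cv2.VideoCapture("endoscope.mp4"),
                 [("nodule", (320.0, 240.0)), ("staple line", (400.0, 180.0))])

In the full system, label creation would be triggered by the tracked tool tip and a speech command rather than hard-coded coordinates, and a more robust tissue tracker would handle occlusion and deformation.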

Keywords

Interactive endoscopy · Lung surgery · VATS · Augmented reality · Human-computer interaction

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Paul Thienphrapa (1), corresponding author
  • Torre Bydlon (1)
  • Alvin Chen (1)
  • Prasad Vagdargi (2)
  • Nicole Varble (1)
  • Douglas Stanton (1)
  • Aleksandra Popovic (1)

  1. Philips Research North America, Cambridge, USA
  2. I-STAR Lab, Johns Hopkins University, Baltimore, USA
