
Deep Neural Architecture for Localization and Tracking of Surgical Tools in Cataract Surgery

  • Neha Banerjee
  • Rachana Sathish
  • Debdoot Sheet
Conference paper
Part of the Lecture Notes in Computational Vision and Biomechanics book series (LNCVB, volume 31)

Abstract

Over the last couple of decades, the quality of surgical interventions has improved owing to the use of computer vision and robotic assistance. One such application of computer vision, namely the detection of surgical tools in videos, is gaining the attention of the medical image processing community. The main motivation for detection, localization, and annotation of surgical tools is to develop applications for surgical workflow analysis. Such analysis can aid in report generation, real-time decision support, etc. Cataract surgery is one of the most common surgical procedures in which surgeons do not have direct visual access to the surgical site; extremely small tools are used, and the surgeons observe the site through a surgical microscope. In such cases, detecting the presence of tools can act as an additional aid to the surgeon as well as to other surgical staff. We propose a framework consisting of a Convolutional Neural Network (CNN) that learns to distinguish and detect the presence of various surgical tools by learning robust features from the frames of a surgical video. Several deep neural architectures are evaluated for the task of detecting tools. The baseline models, pretrained on the ImageNet dataset, achieve up to 50% prediction accuracy. All experiments have been validated on the dataset released as part of the CATARACTS grand challenge. A framework for localization and detection of tools is also proposed, which extracts visual features from glimpses of an image by adaptively selecting and processing only the selected regions at high resolution.
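
The abstract describes tool-presence detection as multi-label classification of video frames with an ImageNet-pretrained backbone. The sketch below is a minimal illustration of that setup, not the authors' implementation: the backbone choice (ResNet-50), the number of tool classes, and the pos_weight values used to counter class imbalance are all assumptions.

```python
# Hypothetical sketch: multi-label tool-presence classifier on an
# ImageNet-pretrained backbone. Class count and loss weights are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_TOOLS = 21  # assumed number of tool classes in the challenge dataset

class ToolPresenceNet(nn.Module):
    """ResNet-50 backbone with one sigmoid output per tool class."""
    def __init__(self, num_tools: int = NUM_TOOLS):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Replace the 1000-way ImageNet head with a per-tool logit layer.
        backbone.fc = nn.Linear(backbone.fc.in_features, num_tools)
        self.backbone = backbone

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, 3, H, W) video frames; returns per-tool presence logits.
        return self.backbone(frames)

# Class imbalance (rare tools appear in few frames) can be addressed with a
# weighted binary cross-entropy; the weight of 5.0 is purely illustrative.
pos_weight = torch.ones(NUM_TOOLS) * 5.0
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

model = ToolPresenceNet()
logits = model(torch.randn(2, 3, 224, 224))
loss = criterion(logits, torch.zeros(2, NUM_TOOLS))
```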
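
The localization framework is said to extract features from glimpses, i.e. adaptively selected regions processed at high resolution. The snippet below sketches only the glimpse-cropping step under assumed parameters (patch size, attended location); how the attention location is predicted is not specified here.

```python
# Hypothetical sketch of glimpse extraction: crop a small window around an
# attended location and process only that patch at high resolution.
import torch
import torch.nn.functional as F

def extract_glimpse(frame: torch.Tensor, center: torch.Tensor, size: int = 96) -> torch.Tensor:
    """Crop a (size x size) patch from `frame` (3, H, W) centred at `center` = (x, y) pixels."""
    _, H, W = frame.shape
    x, y = center.long().tolist()
    half = size // 2
    # Clamp the crop window so it stays fully inside the frame.
    x0 = max(0, min(W - size, x - half))
    y0 = max(0, min(H - size, y - half))
    return frame[:, y0:y0 + size, x0:x0 + size]

# Example: take a glimpse at an assumed tool-tip location and resize it to the
# backbone's input resolution before classification.
frame = torch.randn(3, 540, 960)
patch = extract_glimpse(frame, torch.tensor([480.0, 270.0]))
patch = F.interpolate(patch.unsqueeze(0), size=(224, 224),
                      mode="bilinear", align_corners=False)
```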

Keywords

Cataract surgery · Multiple tool detection · CNN · Deep neural architectures · Class imbalance · Glimpse network

References

  1. Al Hajj H, Lamard M, Charrière K, Cochener B, Quellec G (2017) Surgical tool detection in cataract surgery videos through multi-image fusion inside a convolutional neural network. In: 2017 39th annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp 2002–2005
  2. Girshick R, Donahue J, Darrell T, Malik J (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 580–587
  3. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778
  4. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105
  5. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556
  6. Twinanda AP, Shehata S, Mutter D, Marescaux J, de Mathelin M, Padoy N (2017) EndoNet: a deep architecture for recognition tasks on laparoscopic videos. IEEE Trans Med Imag 36(1):86–97

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Indian Institute of Technology Kharagpur, Kharagpur, India
