Instrument Tracking via Online Learning in Retinal Microsurgery

  • Yeqing Li
  • Chen Chen
  • Xiaolei Huang
  • Junzhou Huang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8673)

Abstract

Robust visual tracking of instruments is an important task in retinal microsurgery. In this context, the instruments undergo a wide variety of appearance changes due to illumination variation and other factors during a procedure, which makes the task very challenging. Most existing methods require collecting a sufficient amount of labelled data, yet perform poorly on appearance changes that are unseen in the training data. To address these problems, we propose a new approach for robust instrument tracking. Specifically, we adopt an online learning technique that collects appearance samples of the instrument on the fly and gradually learns a target-specific detector. Online learning enables the detector to reinforce its model and become more robust over time. The performance of the proposed method has been evaluated on a fully annotated dataset of instruments in in-vivo retinal microsurgery and on a laparoscopy image sequence. In all experiments, the proposed tracking approach shows superior performance compared to several state-of-the-art approaches.
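
Below is a minimal, self-contained sketch of the online-learning idea the abstract describes: appearance samples are harvested as tracking proceeds, and a target-specific detector is updated incrementally so the model reinforces itself over time. The nearest-mean classifier and the simulated patch features are hypothetical stand-ins chosen for brevity, not the paper's actual detector, features, or motion model:

```python
import numpy as np

class OnlineNearestMeanDetector:
    """Toy online classifier: running means of positive/negative patch features."""

    def __init__(self, dim):
        self.pos = np.zeros(dim)   # running mean of instrument (positive) features
        self.neg = np.zeros(dim)   # running mean of background (negative) features
        self.n_pos = 0
        self.n_neg = 0

    def update(self, feat, label):
        # Incremental (online) mean update: the model is reinforced with
        # every new sample, with no need to store the sample history.
        if label == 1:
            self.n_pos += 1
            self.pos += (feat - self.pos) / self.n_pos
        else:
            self.n_neg += 1
            self.neg += (feat - self.neg) / self.n_neg

    def score(self, feat):
        # Positive score: the patch resembles the instrument model more
        # than the background model.
        return np.linalg.norm(feat - self.neg) - np.linalg.norm(feat - self.pos)


rng = np.random.default_rng(0)
detector = OnlineNearestMeanDetector(dim=16)

for t in range(200):                          # simulated video frames
    instrument = rng.normal(1.0, 0.3, 16)     # feature of the patch at the tracked box
    background = rng.normal(0.0, 0.3, 16)     # feature of a patch far from the box
    detector.update(instrument, 1)            # positives harvested on the fly
    detector.update(background, 0)            # negatives harvested on the fly

# After online updates, a fresh instrument-like patch scores positive.
print(detector.score(rng.normal(1.0, 0.3, 16)) > 0)
```

The incremental update is the design point that matters here: because the model is a running summary rather than a stored training set, it can adapt to unseen appearance changes within a procedure without retraining from scratch.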

Keywords

Online Learning · Appearance Model · Visual Tracking · Median Flow · Appearance Change

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Yeqing Li ¹
  • Chen Chen ¹
  • Xiaolei Huang ²
  • Junzhou Huang ¹

  1. Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, USA
  2. Computer Science and Engineering Department, Lehigh University, Bethlehem, USA
