Tracking of Miniature-Sized Objects in 3D Endoscopic Vision

Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 88)

Abstract

The advent of the 3D endoscope has revolutionized industrial and medical inspection, enabling visual examination of inaccessible areas such as underground pipes and human body cavities. Miniature-sized objects such as kidney stones and industrial waste products such as slag can be readily monitored with a 3D endoscope. In this paper, we present a technique to track small objects in 3D endoscopic vision using feature detectors. The proposed methodology uses operator input to segment the target so that reliable and stable features can be extracted. The grow-cut algorithm is used for interactive segmentation of the object in one of the frames, and sparse correspondence is then established using SURF feature detectors. The SURF-based tracking algorithm is extended to track the object across the stereo endoscopic frames. The proposed technique is evaluated by quantitatively analyzing its performance in two ex vivo environments, subjecting the target to conditions such as deformation, illumination changes, and scale and rotation transformations caused by endoscope movement.
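
To make the sparse-correspondence step concrete, the sketch below shows one way to detect and match SURF keypoints between a pair of stereo endoscopic frames with OpenCV, using a brute-force matcher and Lowe's ratio test. It is an illustrative example rather than the authors' implementation: SURF is provided by the opencv-contrib xfeatures2d module (non-free build), and the frame file names, Hessian threshold, and ratio value are assumptions.

```python
# Illustrative sketch only (not the paper's code): SURF keypoint detection and
# sparse matching between left/right endoscopic frames. Requires an OpenCV
# build with the contrib "xfeatures2d" module and non-free algorithms enabled.
import cv2


def match_surf(left_gray, right_gray, hessian_threshold=400, ratio=0.75):
    """Return matched point pairs (left, right) that pass Lowe's ratio test."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp_l, des_l = surf.detectAndCompute(left_gray, None)
    kp_r, des_r = surf.detectAndCompute(right_gray, None)

    # SURF descriptors are float vectors, so match with the L2 norm.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_l, des_r, k=2)

    good = []
    for pair in knn:
        # Keep a match only if it is clearly better than the second-best one.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts_l = [kp_l[m.queryIdx].pt for m in good]
    pts_r = [kp_r[m.trainIdx].pt for m in good]
    return pts_l, pts_r


if __name__ == "__main__":
    # Hypothetical file names for a rectified stereo endoscopic frame pair.
    left = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_frame.png", cv2.IMREAD_GRAYSCALE)
    pts_l, pts_r = match_surf(left, right)
    print(f"{len(pts_l)} sparse correspondences retained")
```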

Keywords

Stereo endoscopic vision · Object tracking · Kidney stones · Feature detection

Acknowledgements

This research work was financially supported by the CSIR-Network Project, "Advanced Instrumentation Solutions for Health Care and Agro-based Applications (ASHA)". The authors would like to thank the Director, CSIR-Central Electronics Engineering Research Institute, for his valuable guidance and continuous support. The authors would also like to extend their gratitude to Birla Sarvajanik Hospital, Pilani, for providing the kidney stones used in the experiments.


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Department of Computer Engineering, Aligarh Muslim University, Aligarh, India
  2. CSIR-Central Electronics Engineering Research Institute, Pilani, India
