Pose Estimation and Feature Tracking for Robot Assisted Surgery with Medical Imaging

  • Christophe Doignon
  • Florent Nageotte
  • Benjamin Maurin
  • Alexandre Krupa
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 8)

This chapter presents several 3-D pose estimation algorithms and visual servoing-based tracking with monocular vision systems such as endoscopes and CT scanners (see Fig. 6.1), developed in an attempt to improve guidance accuracy. These are intended for the 3-D positioning and guidance of surgical instruments in the human body. The efficiency of most model-based visual servoing approaches relies on correspondences between the position of tracked visual features in the current image and their 3-D attitude in the world space. If these correspondences contain errors, the servoing usually fails or converges towards a wrong position. These errors are often overcome by improving the quality of tracking algorithms and feature selection methods [37, 20]. To this end, the work integrates several issues where computational vision can play a role:

  1. estimating the distance between the tip of a laparoscopic instrument and the targeted organ with projected collinear feature points

  2. estimating the 3-D pose of an instrument using multiple-feature tracking and virtual visual servoing

  3. positioning a cylindrical-shaped instrument

  4. registering the instantaneous position of a robot using stereotaxy.
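The first item rests on a classical projective invariant: the cross-ratio of four collinear points is preserved under perspective projection (see [28]), so known 3-D spacing of collinear markers plus their image positions constrains the instrument-to-organ geometry. The following minimal Python sketch, using hypothetical marker coordinates and a generic projective map (not the chapter's actual algorithm), just verifies this invariance:

```python
import numpy as np

def cross_ratio(x):
    """Cross-ratio of four collinear points A, B, C, D given by scalar
    coordinates along their common line: (AC/BC) / (AD/BD)."""
    a, b, c, d = x
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def project(x, p=2.0, q=1.0, r=0.01, s=1.0):
    """A 1-D projective (homographic) map x -> (p*x + q)/(r*x + s);
    perspective projection of a line onto the image plane has this form.
    The coefficients here are arbitrary illustrative values."""
    return (p * x + q) / (r * x + s)

# Hypothetical positions (mm) of four collinear markers on an instrument shaft.
markers_3d = np.array([0.0, 10.0, 25.0, 40.0])
markers_img = project(markers_3d)  # their (simulated) image coordinates

cr_world = cross_ratio(markers_3d)
cr_image = cross_ratio(markers_img)
assert np.isclose(cr_world, cr_image)  # invariant under perspective projection
```

Because the cross-ratio measured in the image equals the one computed from the known marker spacing, the unknown position of a fifth collinear point (e.g. the projection of the organ surface along the instrument axis) can be solved for from image measurements alone.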


The chapter is organized as follows. In the next Section, the problem of the pose estimation of surgical instruments with markers is stated and solved for some degrees of freedom. In Section 3, we focus on the positioning of the symmetry axis of a cylindrical-shaped instrument. Applications of both Sections use endoscopic vision in laparoscopy. The stereotactic registration with a single view (2-D/3-D registration) is studied as a pose estimation problem in Section 4. Finally, a conclusion with some perspectives is drawn in Section 5.


Keywords: Surgical Instrument · Laparoscopic Instrument · Visual Servoing · Endoscopic Image · Perspective Projection
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




References

  1. Casals, A., Amat, J., Laporte, E.: Automatic guidance of an assistant robot in laparoscopic surgery. In: IEEE Int'l Conf. on Robotics and Automation, pp. 895-900. Minneapolis, USA (1996)
  2. Bouguet, J.Y.: Camera Calibration Toolbox for Matlab. MRL - Intel Corp. URL http://www.vision.caltech.edu/bouguetj/calib_doc/
  3. Bouthemy, P.: A maximum likelihood framework for determining moving edges. IEEE Transactions on Pattern Analysis and Machine Intelligence 11(5), 499-511 (1989)
  4. Breeuwer, M., Zylka, W., Wadley, J., Falk, A.: Detection and correction of geometric distortion in 3D CT/MR images. In: Proc. of Computer Assisted Radiology and Surgery, pp. 11-23. Paris (2002)
  5. Burschka, D., Corso, J.J., Dewan, M., Hager, G., Lau, W., Li, M., Lin, H., Marayong, P., Ramey, N.: Navigating inner space: 3-D assistance for minimally invasive surgery. In: Workshop Advances in Robot Vision, in conjunction with the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 67-78. Sendai, Japan (2004)
  6. Climent, J., Mars, P.: Automatic instrument localization in laparoscopic surgery. Electronic Letters on Computer Vision and Image Analysis 4(1), 21-31 (2004)
  7. Dementhon, D., Davis, L.S.: Model-based object pose in 25 lines of code. International Journal of Computer Vision 15(1), 123-141 (1995)
  8. Doignon, C., Graebling, P., de Mathelin, M.: Real-time segmentation of surgical instruments inside the abdominal cavity using a joint hue saturation color feature. Real-Time Imaging 11, 429-442 (2005)
  9. Doignon, C., de Mathelin, M.: A degenerate conic-based method for a direct fitting and 3-D pose of cylinders with a single perspective view. In: IEEE Int'l Conf. on Robotics and Automation, pp. 4220-4225. Roma, Italy (2007)
  10. Doignon, C., Nageotte, F., de Mathelin, M.: The role of insertion points in the detection and positioning of instruments in laparoscopy for robotic tasks. In: Proceedings of the Int'l Conference on Medical Image Computing and Computer-Assisted Intervention - MICCAI, pp. 527-534 (Part I). Copenhagen, Denmark (2006)
  11. Espiau, B., Chaumette, F., Rives, P.: A new approach to visual servoing in robotics. IEEE Trans. Robotics and Automation 8(3), 313-326 (1992)
  12. Goryn, D., Hein, S.: On the estimation of rigid body rotation from noisy data. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(12), 1219-1220 (1995)
  13. Grunert, P., Maurer, J., Müller-Forell, W.: Accuracy of stereotactic coordinate transformation using a localisation frame and computed tomographic imaging. Neurosurgery 22, 173-203 (1999)
  14. Haralick, R., Lee, C., Ottenberg, K., Nölle, M.: Analysis and solutions of the three-point perspective pose estimation problem. In: IEEE Conf. Computer Vision and Pattern Recognition, pp. 592-598. Maui, Hawaii, USA (1991)
  15. Haralick, R.M., Shapiro, L.G.: Computer and Robot Vision, vol. 1. Addison-Wesley (1992)
  16. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge Univ. Press (2000)
  17. Hong, J., Dohi, T., Hashizume, M., Konishi, K., Hata, N.: An ultrasound-driven needle insertion robot for percutaneous cholecystostomy. Physics in Medicine and Biology, pp. 441-455 (2004)
  18. Horaud, R., Conio, B., Leboulleux, O., Lacolle, B.: An analytic solution for the perspective 4-point problem. Computer Vision, Graphics, and Image Processing 47, 33-44 (1989)
  19. Hutchinson, S., Hager, G., Corke, P.: A tutorial on visual servo control. IEEE Trans. Robotics and Automation 12(5), 651-670 (1996)
  20. Kragic, D., Christensen, H.I.: Cue integration for visual servoing. IEEE Trans. on Robotics and Automation 17(1), 19-26 (2001)
  21. Krupa, A., Chaumette, F.: Control of an ultrasound probe by adaptive visual servoing. In: IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, vol. 2, pp. 2007-2012. Edmonton, Canada (2005)
  22. Krupa, A., Gangloff, J., Doignon, C., de Mathelin, M., Morel, G., Leroy, J., Soler, L., Marescaux, J.: Autonomous 3-D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing. IEEE Trans. on Robotics and Automation, special issue on Medical Robotics 19(5), 842-853 (2003)
  23. Lee, S., Fichtinger, G., Chirikjian, G.S.: Numerical algorithms for spatial registration of line fiducials from cross-sectional images. Medical Physics 29(8), 1881-1891 (2002)
  24. Marchand, E., Chaumette, F.: Virtual visual servoing: a framework for real-time augmented reality. In: Proceedings of the EUROGRAPHICS Conference, vol. 21(3) of Computer Graphics Forum, pp. 289-298. Saarbrücken, Germany (2002)
  25. Maurin, B.: Conception et réalisation d'un robot d'insertion d'aiguille pour les procédures percutanées sous imageur scanner. Ph.D. thesis, Louis Pasteur University, France (2005)
  26. Maurin, B., Doignon, C., de Mathelin, M., Gangi, A.: Pose reconstruction from an uncalibrated computed tomography imaging device. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Madison, WI, USA (2003)
  27. Maurin, B., Gangloff, J., Bayle, B., de Mathelin, M., Piccin, O., Zanne, P., Doignon, C., Soler, L., Gangi, A.: A parallel robotic system with force sensors for percutaneous procedures under CT-guidance. In: Int'l Conf. on Medical Image Computing and Computer-Assisted Intervention. St Malo, France (2004)
  28. Maybank, S.: The cross-ratio and the j-invariant. In: Geometric Invariance in Computer Vision, pp. 107-109 (1992)
  29. Nageotte, F.: Contributions à la suture assistée par ordinateur en chirurgie mini-invasive. Ph.D. thesis, Louis Pasteur University, France (2005)
  30. Nageotte, F., Doignon, C., de Mathelin, M., Zanne, P., Soler, L.: Circular needle and needle-holder localization for computer-aided suturing in laparoscopic surgery. In: SPIE Medical Imaging, pp. 87-98. San Diego, USA (2005)
  31. Nageotte, F., Zanne, P., Doignon, C., de Mathelin, M.: Visual servoing-based endoscopic path following for robot-assisted laparoscopic surgery. In: IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems, pp. 2364-2369. Beijing, China (2006)
  32. Papanikolopoulos, N., Khosla, P., Kanade, T.: Visual tracking of a moving target by a camera mounted on a robot: A combination of control and vision. IEEE Trans. Robotics and Automation 9(1), 14-35 (1993)
  33. Quan, L., Lan, Z.: Linear n-point camera pose determination. IEEE Transactions on Pattern Analysis and Machine Intelligence 21(8) (1999)
  34. Sundareswaran, V., Behringer, R.: Visual-servoing-based augmented reality. In: Proceedings of the IEEE Int'l Workshop on Augmented Reality. San Francisco, USA (1998)
  35. Susil, R.C., Anderson, J.H., Taylor, R.H.: A single image registration method for CT guided interventions. In: Proceedings of the Second Int'l Conf. on Medical Image Computing and Computer-Assisted Intervention, pp. 798-808. Cambridge, UK (1999)
  36. Taylor, R., Funda, J., LaRose, D., Treat, M.: A telerobotic system for augmentation of endoscopic surgery. In: IEEE Int'l Conf. on Engineering in Medicine and Biology, pp. 1054-1056. Paris, France (1992)
  37. Tommasini, T., Fusiello, A., Trucco, E., Roberto, V.: Making good features track better. In: IEEE Int'l Conf. on Computer Vision and Pattern Recognition, pp. 178-183. Santa Barbara, USA (1998)
  38. Tonet, O., Ramesh, T., Megali, G., Dario, P.: Image analysis-based approach for localization of endoscopic tools. In: Surgetica'04, pp. 221-228. Chambery, France (2005)
  39. Tsai, R.: A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation 3(4), 323-344 (1987)
  40. Umeyama, S.: Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 13(4), 376-380 (1991)
  41. Wang, Y.F., Uecker, D.R., Wang, Y.: A new framework for vision-enabled and robotically assisted minimally invasive surgery. Journal of Computerized Medical Imaging and Graphics 22, 429-437 (1998)
  42. Wei, G.Q., Arbter, K., Hirzinger, G.: Automatic tracking of laparoscopic instruments by color coding. In: Proc. First Int. Joint Conf. CVRMed-MRCAS'97, pp. 357-366. Springer, Grenoble, France (1997)
  43. Zhang, X., Payandeh, S.: Application of visual tracking for robot-assisted laparoscopic surgery. Journal of Robotic Systems 19(7), 315-328 (2002)

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Christophe Doignon (1)
  • Florent Nageotte (1)
  • Benjamin Maurin (2)
  • Alexandre Krupa (3)

  1. Control, Vision and Robotics Team, University of Strasbourg, France
  2. Cerebellum Automation Company, France
  3. IRISA/INRIA Rennes, Campus de Beaulieu, France
