
Wald's Sequential Analysis for Time-constrained Vision Problems

  • Jiří Matas
  • Jan Šochman
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 8)

In many decision problems in computer vision, both classification errors and time to decision characterise the quality of an algorithmic solution. This is especially true for applications of vision to robotics, where a real-time response is typically required.

Time-constrained classification, detection and matching problems can often be formalised in the framework of sequential decision-making. We show how to derive quasi-optimal time-constrained solutions for three different vision problems by applying Wald’s sequential analysis. In particular, we adapt and generalise Wald’s sequential probability ratio test (SPRT) and apply it to three vision problems: (i) face detection, (ii) real-time detection of distinguished regions (interest points) and (iii) establishing correspondences by the RANSAC algorithm, with applications in e.g. SLAM, 3D reconstruction and object recognition.
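
To make the decision rule concrete, the following is a minimal Python sketch of Wald’s SPRT (the names sprt, measurements and log_lr are ours, purely illustrative, not from the chapter). Observations arrive one at a time, their log-likelihood ratios are accumulated, and the sum is compared against Wald’s approximate thresholds derived from the target error rates alpha (false positive) and beta (false negative). For i.i.d. observations, Wald showed that this test needs on average the fewest observations of any test achieving the same error rates, which is precisely the property exploited in the time-constrained setting.

    import math

    def sprt(measurements, log_lr, alpha, beta):
        """Wald's sequential probability ratio test (illustrative sketch).

        measurements: iterable of observations x_1, x_2, ...
        log_lr(x):    log p(x | H1) - log p(x | H0) for one observation
        alpha, beta:  target false positive / false negative rates
        """
        # Wald's approximate thresholds A ~ (1-beta)/alpha, B ~ beta/(1-alpha);
        # only the likelihood ratio of the observations is needed.
        upper = math.log((1.0 - beta) / alpha)   # decide H1 at or above this
        lower = math.log(beta / (1.0 - alpha))   # decide H0 at or below this

        llr, t = 0.0, 0
        for t, x in enumerate(measurements, start=1):
            llr += log_lr(x)                     # accumulate evidence
            if llr >= upper:
                return "H1", t                   # early accept
            if llr <= lower:
                return "H0", t                   # early reject
        return "undecided", t                    # measurements exhausted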

In the face detection problem, we are interested in learning the fastest detector satisfying constraints on false positive and false negative rates. We solve the problem by WaldBoost [15], a combination of Wald’s sequential probability ratio test and AdaBoost learning [2]. The solution can be viewed as a principled way to build a close-to-optimal “cascade of classifiers” [22]. Naturally, the approach is applicable to other classes of objects.
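
As a sketch of how such a detector runs at test time (the interface below is illustrative, not the authors’ code), the weak classifiers selected by AdaBoost are evaluated on an image patch one at a time, and after each evaluation the accumulated response is compared against two per-step thresholds estimated during learning from the SPRT error bounds. Most background patches are therefore rejected after only a few weak classifiers.

    def waldboost_classify(x, weak_learners, accept_thr, reject_thr):
        """Sequential evaluation in the spirit of WaldBoost (illustrative).

        weak_learners: functions h_t(x) with real-valued responses, in the
                       order selected by AdaBoost
        accept_thr[t], reject_thr[t]: per-step decision thresholds derived
                       from the SPRT false positive / false negative bounds
        """
        H = 0.0
        for t, h in enumerate(weak_learners):
            H += h(x)                       # strong classifier response so far
            if H >= accept_thr[t]:
                return +1                   # early accept, e.g. "face"
            if H <= reject_thr[t]:
                return -1                   # early reject, e.g. "background"
        return +1 if H >= 0.0 else -1       # forced decision after last step

In detection problems where a missed face is costly, the acceptance thresholds are typically set to +∞ so that the test only ever rejects early and the positive decision is postponed until all weak classifiers have been evaluated; this recovers the familiar rejection-cascade behaviour of [22].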

Keywords

Motion Estimation · False Negative Rate · Interest Point · Face Detection · Epipolar Geometry

References

  1. Casals, A., Amat, J., Laporte, E.: Automatic guidance of an assistant robot in laparoscopic surgery. In: IEEE Int’l Conf. on Robotics and Automation, pp. 895-900. Minneapolis, USA (1996)
  2. Bouguet, J.Y.: Camera calibration Matlab toolbox. MRL - Intel Corp. URL http://www.vision.caltech.edu/bouguetj/calib_doc/
  3. Bouthemy, P.: A maximum likelihood framework for determining moving edges. IEEE Transactions on Pattern Analysis and Machine Intelligence 11(5), 499-511 (1989)
  4. Breeuwer, M., Zylka, W., Wadley, J., Falk, A.: Detection and correction of geometric distortion in 3D CT/MR images. In: Proc. of Computer Assisted Radiology and Surgery, pp. 11-23. Paris (2002)
  5. Burschka, D., Corso, J.J., Dewan, M., Hager, G., Lau, W., Li, M., Lin, H., Marayong, P., Ramey, N.: Navigating inner space: 3-D assistance for minimally invasive surgery. In: Workshop Advances in Robot Vision, in conjunction with the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 67-78. Sendai, Japan (2004)
  6. Climent, J., Mars, P.: Automatic instrument localization in laparoscopic surgery. Electronic Letters on Computer Vision and Image Analysis 4(1), 21-31 (2004)
  7. Dementhon, D., Davis, L.S.: Model-based object pose in 25 lines of code. IJCV 15(1), 123-141 (1995)
  8. Doignon, C., Graebling, P., de Mathelin, M.: Real-time segmentation of surgical instruments inside the abdominal cavity using a joint hue saturation color feature. Real-Time Imaging 11, 429-442 (2005)
  9. Doignon, C., de Mathelin, M.: A degenerate conic-based method for a direct fitting and 3D pose of cylinders with a single perspective view. In: IEEE Int’l Conf. on Robotics and Automation, pp. 4220-4225. Roma, Italy (2007)
  10. Doignon, C., Nageotte, F., de Mathelin, M.: The role of insertion points in the detection and positioning of instruments in laparoscopy for robotic tasks. In: Proceedings of the Int’l Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Part I, pp. 527-534. Copenhagen, Denmark (2006)
  11. Espiau, B., Chaumette, F., Rives, P.: A new approach to visual servoing in robotics. IEEE Trans. Robotics and Automation 8(3), 313-326 (1992)
  12. Goryn, D., Hein, S.: On the estimation of rigid body rotation from noisy data. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(12), 1219-1220 (1995)
  13. Grunert, P., Mäurer, J., Müller-Forell, W.: Accuracy of stereotactic coordinate transformation using a localisation frame and computed tomographic imaging. Neurosurgery 22, 173-203 (1999)
  14. Haralick, R., Lee, C., Ottenberg, K., Nölle, M.: Analysis and solutions of the three-point perspective pose estimation problem. In: IEEE Conf. Computer Vision and Pattern Recognition, pp. 592-598. Maui, Hawaii, USA (1991)
  15. Haralick, R.M., Shapiro, L.G.: Computer and Robot Vision, vol. 1. Addison-Wesley Publishing (1992)
  16. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge Univ. Press (2000)
  17. Hong, J., Dohi, T., Hashizume, M., Konishi, K., Hata, N.: An ultrasound-driven needle insertion robot for percutaneous cholecystostomy. Journal of Physics in Med. and Bio., pp. 441-455 (2004)
  18. Horaud, R., Conio, B., Leboulleux, O., Lacolle, B.: An analytic solution for the perspective 4-point problem. Computer Vision, Graphics, and Image Processing 47, 33-44 (1989)
  19. Hutchinson, S., Hager, G., Corke, P.: A tutorial on visual servo control. IEEE Trans. Robotics and Automation 12(5), 651-670 (1996)
  20. Kragic, D., Christensen, H.I.: Cue integration for visual servoing. IEEE Trans. on Robotics and Automation 17(1), 19-26 (2001)
  21. Krupa, A., Chaumette, F.: Control of an ultrasound probe by adaptive visual servoing. In: IEEE/RSJ Int’l Conf. on Intelligent Robots and Systems, vol. 2, pp. 2007-2012. Edmonton, Canada (2005)
  22. Krupa, A., Gangloff, J., Doignon, C., de Mathelin, M., Morel, G., Leroy, J., Soler, L., Marescaux, J.: Autonomous 3-D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing. IEEE Trans. on Robotics and Automation, special issue on Medical Robotics 19(5), 842-853 (2003)
  23. Lee, S., Fichtinger, G., Chirikjian, G.S.: Numerical algorithms for spatial registration of line fiducials from cross-sectional images. Medical Physics 29(8), 1881-1891 (2002)
  24. Marchand, E., Chaumette, F.: Virtual visual servoing: a framework for real-time augmented reality. In: Proceedings of the EUROGRAPHICS Conference, Computer Graphics Forum 21(3), pp. 289-298. Saarbrücken, Germany (2002)
  25. Maurin, B.: Conception et réalisation d’un robot d’insertion d’aiguille pour les procédures percutanées sous imageur scanner. Ph.D. thesis, Louis Pasteur University, France (2005)
  26. Maurin, B., Doignon, C., de Mathelin, M., Gangi, A.: Pose reconstruction from an uncalibrated computed tomography imaging device. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Madison, WI, USA (2003)
  27. Maurin, B., Gangloff, J., Bayle, B., de Mathelin, M., Piccin, O., Zanne, P., Doignon, C., Soler, L., Gangi, A.: A parallel robotic system with force sensors for percutaneous procedures under CT-guidance. In: Int’l Conf. on Medical Image Computing and Computer-Assisted Intervention. St Malo, France (2004)
  28. Maybank, S.: The cross-ratio and the j-invariant. In: Geometric Invariance in Computer Vision, pp. 107-109 (1992)
  29. Nageotte, F.: Contributions à la suture assistée par ordinateur en chirurgie mini-invasive. Ph.D. thesis, Louis Pasteur University, France (2005)
  30. Nageotte, F., Doignon, C., de Mathelin, M., Zanne, P., Soler, L.: Circular needle and needle-holder localization for computer-aided suturing in laparoscopic surgery. In: SPIE Medical Imaging, pp. 87-98. San Diego, USA (2005)
  31. Nageotte, F., Zanne, P., Doignon, C., de Mathelin, M.: Visual servoing-based endoscopic path following for robot-assisted laparoscopic surgery. In: IEEE/RSJ Int’l Conf. on Intelligent Robots and Systems, pp. 2364-2369. Beijing, China (2006)
  32. Papanikolopoulos, N., Khosla, P., Kanade, T.: Visual tracking of a moving target by a camera mounted on a robot: A combination of control and vision. IEEE Trans. Robotics and Automation 9(1), 14-35 (1993)
  33. Quan, L., Lan, Z.: Linear N-point camera pose determination. IEEE Transactions on PAMI 21(8) (1999)
  34. Sundareswaran, V., Behringer, R.: Visual servoing-based augmented reality. In: Proceedings of the IEEE Int’l Workshop on Augmented Reality. San Francisco, USA (1998)
  35. Susil, R.C., Anderson, J.H., Taylor, R.H.: A single image registration method for CT guided interventions. In: Proceedings of the Second Int’l Conf. on Medical Image Computing and Computer-Assisted Intervention, pp. 798-808. Cambridge, UK (1999)
  36. Taylor, R., Funda, J., LaRose, D., Treat, M.: A telerobotic system for augmentation of endoscopic surgery. In: IEEE Int’l Conf. on Engineering in Med. and Bio., pp. 1054-1056. Paris, France (1992)
  37. Tommasini, T., Fusiello, A., Trucco, E., Roberto, V.: Making good features track better. In: IEEE Int’l Conf. on Computer Vision and Pattern Recognition, pp. 178-183. Santa Barbara, USA (1998)
  38. Tonet, O., Ramesh, T., Megali, G., Dario, P.: Image analysis-based approach for localization of endoscopic tools. In: Surgetica’04, pp. 221-228. Chambéry, France (2005)
  39. Tsai, R.: A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation 3(4), 323-344 (1987)
  40. Umeyama, S.: Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on PAMI 13(4), 376-380 (1991)
  41. Wang, Y.F., Uecker, D.R., Wang, Y.: A new framework for vision-enabled and robotically assisted minimally invasive surgery. Journal of Computerized Medical Imaging and Graphics 22, 429-437 (1998)
  42. Wei, G.Q., Arbter, K., Hirzinger, G.: Automatic tracking of laparoscopic instruments by color coding. In: Proc. First Int. Joint Conf. CVRMed-MRCAS’97, pp. 357-366. Springer Verlag, Grenoble, France (1997)
  43. Zhang, X., Payandeh, S.: Application of visual tracking for robot-assisted laparoscopic surgery. Journal of Robotic Systems 19(7), 315-328 (2002)

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Jiří Matas (1)
  • Jan Šochman (1)

  1. Department of Cybernetics, Czech Technical University in Prague, Czech Republic
