Abstract

A growing number of medical datasets now contain both a spatial and a temporal dimension. Trajectories, from tools or body features, are thus becoming increasingly important for their analysis. In this paper, we are interested in recovering the spatial and temporal differences between trajectories coming from different datasets. In particular, we address the case of surgical gestures, where trajectories contain both spatial transformations and speed differences in execution. We first define the spatio-temporal registration problem between multiple trajectories. We then propose an optimization method to jointly recover both the rigid spatial motions and the non-linear time warpings. The optimization also generates a generic trajectory template, in which spatial and temporal differences have been factored out. This approach can potentially be used to register and compare gestures side-by-side for training sessions, to build gesture trajectory models for automation by a robot, or to register the trajectories of natural or artificial markers which follow similar motions. We demonstrate its usefulness with synthetic and real experiments. In particular, we register and analyze complex surgical gestures performed by tele-manipulation using the da Vinci robot.

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Nicolas Padoy, The Johns Hopkins University, Baltimore, USA
  • Gregory D. Hager, The Johns Hopkins University, Baltimore, USA