Real-Time Descriptorless Feature Tracking

  • Antonio L. Rodríguez
  • Pedro E. López-de-Teruel
  • Alberto Ruiz
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5716)

Abstract

This paper presents a simple and efficient estimator of long-term sparse optical flow. It is supported by a novel approach to feature tracking, based essentially on the global coherence of local movements. Expensive invariant appearance descriptors are not required: the locations of salient points in successive frames provide enough information to create a large number of accurate and stable tracking histories that remain alive for significantly long periods. Hence, wide-baseline matching can be achieved both in extremely regular scenes and in cases where corresponding points are photometrically very different. Our experiments show that this method robustly maintains hundreds of trajectories in real time over long video sequences on a standard computer.
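
The matching principle the abstract describes can be sketched concretely. The following Python fragment is an illustrative reconstruction under our own assumptions, not the authors' implementation (which the abstract does not detail): it pairs two clouds of 2-D interest-point locations using positions alone, by forming tentative nearest-neighbour pairs, fitting a global affine motion to random minimal samples of those pairs RANSAC-style, and keeping the motion under which the most points find a nearby partner. All names and parameters (match_by_global_coherence, inlier_tol, n_iters) are hypothetical.

    import numpy as np

    def match_by_global_coherence(pts_prev, pts_next, n_iters=500,
                                  inlier_tol=3.0, seed=None):
        """Pair two sets of 2-D point locations without appearance descriptors.

        Illustrative sketch only: hypothesize global affine motions from
        random minimal samples of tentative pairs, keep the most coherent one.
        """
        rng = np.random.default_rng(seed)
        pts_prev = np.asarray(pts_prev, dtype=float)
        pts_next = np.asarray(pts_next, dtype=float)
        if len(pts_prev) < 3 or len(pts_next) == 0:
            return []

        # Tentative pairs: each previous point with its nearest next-frame point.
        d2 = ((pts_prev[:, None, :] - pts_next[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)

        hom = np.hstack([pts_prev, np.ones((len(pts_prev), 1))])  # homogeneous coords
        best_inliers = np.zeros(len(pts_prev), dtype=bool)
        best_A = None
        for _ in range(n_iters):
            idx = rng.choice(len(pts_prev), size=3, replace=False)  # affine: 3 pairs
            # Least-squares affine map sending the sampled points to their partners.
            A, *_ = np.linalg.lstsq(hom[idx], pts_next[nn[idx]], rcond=None)
            pred = hom @ A
            # A point is coherent if some next-frame point lies near its prediction.
            resid = np.sqrt(((pred[:, None, :] - pts_next[None, :, :]) ** 2).sum(-1))
            inliers = resid.min(axis=1) < inlier_tol
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_A = inliers, A

        if best_A is None:
            return []
        pred = hom @ best_A
        resid = np.sqrt(((pred[:, None, :] - pts_next[None, :, :]) ** 2).sum(-1))
        partner = resid.argmin(axis=1)
        # Index pairs (previous frame -> next frame) consistent with the best motion.
        return [(int(i), int(partner[i])) for i in np.flatnonzero(best_inliers)]

In a full tracker such a search would run between every pair of consecutive frames, with the surviving pairs extending the live trajectories; the point of the sketch is that a global coherence score alone, with no photometric descriptor, can disambiguate the matches. The actual system additionally maintains long-term tracking histories and runs at video rate, which this fragment does not attempt.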

Keywords

Motion Model, Salient Point, Successive Frame, Interest Point Detector, Candidate Matchings

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Antonio L. Rodríguez (1)
  • Pedro E. López-de-Teruel (2)
  • Alberto Ruiz (2)
  1. Dpto. Ingeniería y Tecnología de Computadores, University of Murcia, Spain
  2. Dpto. Lenguajes y Sistemas, University of Murcia, Spain
