Model Based Pose Estimation Using SURF

  • Peter Decker
  • Dietrich Paulus
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6469)

Abstract

Estimating a camera pose (position and orientation) from an image, given a 3D model of the world, is a topic of great interest in many current fields of research. When aiming for a model-based pose estimation approach, several questions arise: What is the model? How do we acquire a model? How is the image linked to the model? How is a pose computed and verified from this information? In this paper we present a new model-based pose estimation approach that relies solely on SURF features. We give a formal definition of our model, show how to build such a model automatically from image data, how to integrate two partial models, and how pose estimation works for new images.
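
A minimal sketch of the pose estimation step outlined above, under the assumption that the model stores a SURF descriptor with every 3D point: SURF features are extracted from the query image, matched against the model descriptors, and the resulting 2D-3D correspondences are passed to a robust perspective-n-point solver. This is an illustrative sketch using OpenCV, not the paper's own model structure or solvers; all names (estimate_pose, model_points_3d, model_descriptors, the Hessian and ratio-test thresholds) are hypothetical.

    # Hypothetical sketch: localize a new image against a SURF-based 3D model.
    # Assumes every model point carries one SURF descriptor from the image(s)
    # it was reconstructed from. Requires OpenCV built with the contrib modules.
    import numpy as np
    import cv2

    def estimate_pose(image, model_points_3d, model_descriptors, K):
        """Return (R, t) of the camera, or None if localization fails.

        image             -- grayscale query image (uint8)
        model_points_3d   -- (N, 3) float32 array of model point coordinates
        model_descriptors -- (N, 64) float32 array of SURF descriptors
        K                 -- (3, 3) camera intrinsic matrix
        """
        # 1. Extract SURF keypoints and descriptors from the query image.
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        keypoints, descriptors = surf.detectAndCompute(image, None)
        if descriptors is None:
            return None

        # 2. Match image descriptors to model descriptors
        #    (approximate nearest neighbours plus a ratio test).
        matcher = cv2.FlannBasedMatcher()
        matches = matcher.knnMatch(descriptors, model_descriptors, k=2)
        good = []
        for pair in matches:
            if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
                good.append(pair[0])
        if len(good) < 6:
            return None

        # 3. Build 2D-3D correspondences and estimate the pose with RANSAC.
        pts_2d = np.float32([keypoints[m.queryIdx].pt for m in good])
        pts_3d = np.float32([model_points_3d[m.trainIdx] for m in good])
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts_3d, pts_2d, K, None, reprojectionError=4.0)
        if not ok or inliers is None or len(inliers) < 6:
            return None

        R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
        return R, tvec

In this sketch, the RANSAC inlier count serves as the verification step: a pose supported by too few consistent correspondences is rejected.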

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Peter Decker¹
  • Dietrich Paulus¹

  1. Active Vision Group, University of Koblenz-Landau, Koblenz, Germany
