
Adaptable Model-Based Tracking Using Analysis-by-Synthesis Techniques

  • Harald Wuest
  • Folker Wientapper
  • Didier Stricker
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4673)

Abstract

In this paper we present a novel analysis-by-synthesis approach for real-time camera tracking in industrial scenarios. The camera pose estimation is based on the tracking of line features which are generated dynamically in every frame by rendering a polygonal model and extracting contours from the rendered scene. Different methods of line model generation are investigated. Depending on the scenario and the given 3D model, either the image gradient of the frame buffer or discontinuities in the z-buffer and the normal map are used to generate a 2D edge map. The 3D control points on a contour are computed from the depth values stored in the z-buffer. By aligning the generated features with edges in the current image, the extrinsic parameters of the camera are estimated. The camera pose used for rendering is predicted by a line-based frame-to-frame tracking step that exploits the generated edge features. The method is validated and evaluated using both ground-truth data and real image sequences.
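To make the contour-based control-point generation described above concrete, the sketch below shows how candidate contour pixels can be detected as depth discontinuities in a rendered z-buffer and lifted to 3D control points using the stored depth. This is a minimal illustration, not the authors' implementation: the linear depth map, the intrinsic matrix K, the discontinuity threshold, and all function names are hypothetical assumptions.

```python
# Minimal sketch (not the paper's implementation): contour pixels from
# z-buffer discontinuities, back-projected to 3D control points.
# Assumes the z-buffer has already been converted to linear depth in
# camera units and that the pinhole intrinsics K are known.

import numpy as np

def depth_discontinuity_edges(depth, threshold=0.05):
    """Mark pixels where the depth changes abruptly (candidate contour pixels)."""
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (gx + gy) > threshold

def backproject(u, v, z, K):
    """Back-project pixel (u, v) with depth z into a 3D point in camera coordinates."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def contour_control_points(depth, K, step=10):
    """Sample every `step`-th edge pixel and lift it to 3D using the stored depth."""
    edges = depth_discontinuity_edges(depth)
    vs, us = np.nonzero(edges)
    return np.array([backproject(u, v, depth[v, u], K)
                     for v, u in zip(vs[::step], us[::step])])

if __name__ == "__main__":
    # Toy example: a plane at 2 m with a box-shaped object at 1 m in the middle.
    depth = np.full((240, 320), 2.0)
    depth[80:160, 120:200] = 1.0
    K = np.array([[300.0, 0.0, 160.0],
                  [0.0, 300.0, 120.0],
                  [0.0, 0.0, 1.0]])
    pts = contour_control_points(depth, K)
    print(pts.shape, pts[:3])
```

In the actual system such 3D control points would be re-generated every frame from the rendered model and matched against image edges to update the camera pose; the sketch only covers the point-generation step.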

Keywords

Augmented Reality · Edge Image · Polygonal Model · Line Model Generation · Material Edge



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Harald Wuest (1)
  • Folker Wientapper (2)
  • Didier Stricker (2)

  1. Centre for Advanced Media Technology (CAMTech), Nanyang Technological University (NTU), 50 Nanyang Avenue, 649812, Singapore
  2. Department of Virtual and Augmented Reality, Fraunhofer IGD, TU Darmstadt, GRIS, Germany
