Abstract
Up to now, structure from motion algorithms have proceeded in two well-defined steps: the first and most important step recovers the rigid transformation between two views, and the second uses this transformation to compute the structure of the scene in view. This paper introduces a novel approach to structure from motion in which both steps are accomplished in a synergistic manner. Existing approaches to 3D motion estimation are mostly based on optic flow, which, however, is problematic at depth discontinuities. If we knew where the depth discontinuities were, we could accurately estimate flow values for image patches corresponding to smooth scene patches, using any of a multitude of approaches based on smoothness constraints; but locating the discontinuities requires solving the structure from motion problem first. In the past this dilemma has been addressed by improving flow estimation through sophisticated optimization techniques, whose performance often depends on the scene in view. In this paper we follow a different approach. The main idea exploits the interaction between 3D motion and shape, which allows us to estimate the 3D motion while simultaneously segmenting the scene. If we use a wrong 3D motion estimate to compute depth, we obtain a distorted version of the depth function. The distortion is such that the worse the motion estimate, the more likely we are to obtain depth estimates that are locally unsmooth, i.e., that vary more than the correct ones. Since local variability of depth is due either to an actual discontinuity or to a wrong 3D motion estimate, being able to differentiate between these two cases yields the correct motion, namely the one producing the "smoothest" estimated depth, as well as the image locations of scene discontinuities.
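The core idea admits a toy numeric illustration. Below is a minimal 1D sketch, not the paper's algorithm: a purely translating camera with focus of expansion (FOE) x0 induces flow u = (x - x0)/Z, so inverting with a candidate FOE gives an implied depth that is smooth only when the candidate is correct. Exact flow is synthesized here purely to set up the geometry (the paper itself avoids computing flow); the smoothness score and all names are illustrative choices.

```python
import numpy as np

def estimated_depth(x, u, foe_candidate):
    """Depth implied by flow u if the FOE were at foe_candidate."""
    return (x - foe_candidate) / u

def roughness(z):
    """Local variability score: sum of squared first differences."""
    return np.sum(np.diff(z) ** 2)

# Smooth scene and true motion.
x = np.linspace(-1.0, 1.0, 201)
true_foe = 0.3
depth = 5.0 + 0.5 * np.sin(3.0 * x)        # smooth depth profile
u = (x - true_foe) / depth                  # exact translational flow

# Keep points with non-negligible flow so the inversion is well conditioned.
ok = np.abs(u) > 1e-2
x, u = x[ok], u[ok]

# Grid search: the candidate whose implied depth varies least wins.
candidates = np.linspace(-0.5, 0.5, 101)
scores = [roughness(estimated_depth(x, u, c)) for c in candidates]
best = candidates[int(np.argmin(scores))]
print(f"recovered FOE: {best:.2f} (true: {true_foe})")
```

A wrong candidate c distorts the implied depth into Z(x)(x - c)/(x - x0), which varies strongly near the true FOE, so its roughness score exceeds that of the correct candidate.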
Although no optic flow values are computed, we show that our algorithm is closely related to minimizing the epipolar constraint, and we present a number of experimental results with real image sequences that indicate the robustness of the method.
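For reference, the epipolar constraint mentioned above can be checked numerically as follows. This is a generic illustration in calibrated (normalized) coordinates, not the paper's method: for rotation R and translation t between views, corresponding projections satisfy x2^T E x1 = 0 with E = [t]_x R, and a wrong motion hypothesis leaves a nonzero residual. The synthetic points and the specific motion values are assumptions for the demo.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x so that skew(t) @ v == np.cross(t, v)."""
    tx, ty, tz = t
    return np.array([[0, -tz, ty], [tz, 0, -tx], [-ty, tx, 0]])

def essential(R, t):
    """Essential matrix E = [t]_x R."""
    return skew(t) @ R

# Synthetic rig: random 3D points in front of the camera, known motion.
rng = np.random.default_rng(0)
P = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))
t = np.array([0.2, 0.0, 1.0])
theta = 0.05                                   # small rotation about y
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])

x1 = P / P[:, 2:3]                             # view-1 projections
P2 = P @ R.T + t                               # points in view-2 coordinates
x2 = P2 / P2[:, 2:3]                           # view-2 projections

def residual(R, t):
    """Mean absolute epipolar residual |x2^T E x1| over all points."""
    E = essential(R, t)
    return np.mean(np.abs(np.einsum('ij,jk,ik->i', x2, E, x1)))

true_res = residual(R, t)
wrong_res = residual(R, np.array([-0.2, 0.0, 1.0]))   # wrong translation
print(f"true motion residual:  {true_res:.2e}")
print(f"wrong motion residual: {wrong_res:.2e}")
```

The residual vanishes (up to floating-point error) for the true motion and is clearly nonzero for the wrong translation direction, which is what a search over motion candidates can exploit.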
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
Cite this paper
Brodský, T., Fermüller, C., Aloimonos, Y. (1998). Simultaneous estimation of viewing geometry and structure. In: Burkhardt, H., Neumann, B. (eds) Computer Vision — ECCV'98. ECCV 1998. Lecture Notes in Computer Science, vol 1406. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0055677
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64569-6
Online ISBN: 978-3-540-69354-3