Automatic Camera Calibration from a Single Manhattan Image
Abstract
We present a completely automatic method for obtaining the approximate calibration of a camera (alignment to a world frame and focal length) from a single image of an unknown scene, provided only that the scene satisfies a Manhattan world assumption. This assumption states that the imaged scene contains three orthogonal, dominant directions, and is often satisfied by outdoor or indoor views of man-made structures and environments.
The proposed method combines the calibration likelihood introduced in [5] with a stochastic search algorithm to obtain a MAP estimate of the camera's focal length and alignment. Results on real images of indoor scenes are presented. The calibrations obtained are less accurate than those from standard methods employing a calibration pattern or multiple images; however, they are sufficiently accurate for common vision tasks such as tracking. Moreover, the results are obtained without any user intervention, from a single image, and without a calibration pattern.
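As a rough illustration of this approach (not the authors' implementation), the sketch below scores randomly sampled (rotation, focal length) hypotheses against a simplified Manhattan-world likelihood over detected edge orientations and keeps the highest-scoring one. The edge representation, priors, outlier weights, and the likelihood itself are placeholder assumptions standing in for the model of [5].

```python
# Minimal sketch of MAP camera calibration under a Manhattan-world assumption.
# All numerical constants, priors, and the likelihood are illustrative placeholders.
import numpy as np

def rotation_from_euler(a, b, c):
    """Rotation matrix from Euler angles (radians)."""
    ca, sa, cb, sb, cc, sc = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(c), np.sin(c)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def vanishing_points(R, f, pp):
    """Homogeneous images of the three Manhattan directions: the columns of K R."""
    K = np.array([[f, 0, pp[0]], [0, f, pp[1]], [0, 0, 1.0]])
    return (K @ R).T  # each row is one vanishing point

def log_likelihood(edges, R, f, pp, kappa=20.0):
    """Crude stand-in for the Manhattan likelihood: each edge (midpoint, angle)
    is explained by the best-aligned vanishing point or by an outlier process."""
    vps = vanishing_points(R, f, pp)
    total = 0.0
    for (x, y), theta in edges:
        best = np.log(0.2)                      # outlier probability (assumed)
        for vp in vps:
            if abs(vp[2]) < 1e-9:               # vanishing point at infinity
                pred = np.arctan2(vp[1], vp[0])
            else:                                # direction from edge midpoint to vp
                pred = np.arctan2(vp[1] / vp[2] - y, vp[0] / vp[2] - x)
            err = (theta - pred + np.pi / 2) % np.pi - np.pi / 2  # undirected lines
            best = max(best, np.log(0.8 / 3) - kappa * err ** 2)
        total += best
    return total

def calibrate(edges, pp, n_samples=2000, seed=0):
    """Draw random (rotation, focal length) hypotheses and keep the best-scoring one."""
    rng = np.random.default_rng(seed)
    best_score, best_params = -np.inf, None
    for _ in range(n_samples):
        a, b, c = rng.uniform(-np.pi / 4, np.pi / 4, size=3)  # alignment prior (assumed)
        f = rng.uniform(300.0, 2000.0)                        # focal-length prior (assumed)
        R = rotation_from_euler(a, b, c)
        score = log_likelihood(edges, R, f, pp)
        if score > best_score:
            best_score, best_params = score, (R, f)
    return best_params
```

In a fuller treatment the hypotheses would be re-weighted and refined as in importance sampling rather than simply ranked, and the edges would come from a standard line or gradient detector; this sketch only conveys the overall search structure.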
Keywords
Focal Length, Single Image, Importance Sampling, Camera Calibration, Principal Point

References
- 1. M. Antone and S. Teller. Automatic recovery of relative camera rotations for urban scenes. In Proc. Conf. Computer Vision and Pattern Recognition, volume 2, pages 282–289, 2000.
- 2. J.-Y. Bouguet. Camera calibration for Matlab. Technical report, Intel Corporation, available from http://www.vision.caltech.edu/bouguetj/calib_doc/, 2001.
- 3. B. Caprile and V. Torre. Using vanishing points for camera calibration. Int. J. Computer Vision, 4(2):127–140, 1990.
- 4. R. Cipolla, T. Drummond, and D. Robertson. Camera calibration from vanishing points in images of architectural scenes. In Proc. British Machine Vision Conference, pages 382–391, 1999.
- 5. J.M. Coughlan and A.L. Yuille. Manhattan world: Compass direction from a single image by Bayesian inference. In Proc. 7th Int. Conf. on Computer Vision, pages 941–947, 1999.
- 6. A. Criminisi, I.D. Reid, and A. Zisserman. Single view metrology. Int. J. Computer Vision, 40(2):123–148, 2000.
- 7. K. Daniilidis and J. Ernst. Active intrinsic calibration using vanishing points. Pattern Recognition Letters, 17(11):1179–1189, 1996.
- 8. O. Faugeras. 3D Computer Vision. MIT Press, 1993.
- 9. R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
- 10. M. Isard and J. MacCormick. BraMBLe: A Bayesian multiple-blob tracker. In Proc. 8th Int. Conf. Computer Vision, volume 2, pages 34–41, 2001.
- 11. K. Kanatani. Statistical analysis of focal-length calibration using vanishing points. IEEE Trans. Robotics and Automation, 8(6), 1992.
- 12. D. Liebowitz and A. Zisserman. Metric rectification for perspective images of planes. In Proc. Conf. Computer Vision and Pattern Recognition, pages 482–488, 1998.
- 13. G. McLachlan and K. Basford. Mixture Models: Inference and Applications to Clustering. Marcel Dekker, 1988.
- 14. G. McLean and D. Kotturi. Vanishing point detection by line clustering. IEEE Trans. Pattern Analysis and Machine Intelligence, 17(11):1090–1094, 1995.
- 15. L. Ramshaw. Averaging on a quotient of a sphere. Technical note 2002-003, Compaq Systems Research Center, Palo Alto, CA, 2002.
- 16. R.Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. of Robotics and Automation, 3(4):323–344, 1987.
- 17. A. Watt and M. Watt. Advanced Animation and Rendering Techniques. Addison-Wesley, 1992.
- 18. Z. Zhang. A flexible new technique for camera calibration. IEEE Trans. Pattern Analysis and Machine Intelligence, 22(11):1330–1334, 2000.