Proceedings of the 2nd International Conference on Computer Vision & Image Processing, pp. 289–302
Flexible Threshold Visual Odometry Algorithm Using Fuzzy Logics
Abstract
Visual odometry is a well-established technique in computer vision for estimating the rotation and translation of a camera between two consecutive time instants. The RANSAC scheme commonly used for outlier rejection relies on a constant threshold to select inliers. Selecting an adequate number of inliers dispersed over the entire image is critical for accurate pose estimation, and this selection is governed by the inlier threshold. In this paper, the threshold for inlier classification is adapted with a fuzzy logic scheme so that it varies with the data dynamics. The fuzzy logic is designed under an assumption about the maximum camera rotation that can be observed between consecutive frames. The proposed methodology has been applied to the KITTI dataset, and adaptive RANSAC with and without fuzzy logic is compared, with the aim of imparting flexibility to the visual odometry algorithm.
Keywords
Visual odometry · RANSAC · Fuzzy logic · Navigation
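The sketch below is not the authors' implementation; it is only one plausible reading of the idea summarized in the abstract, in which a fuzzy rule base driven by the apparent motion between consecutive frames modulates the RANSAC inlier threshold instead of keeping it constant. All function names, membership breakpoints, and the toy translation-only model are assumptions made for illustration.

```python
import numpy as np


def triangular(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def fuzzy_inlier_threshold(mean_flow_px, base_thresh_px=1.0):
    """Map the mean feature displacement between frames to an inlier
    threshold via simple Mamdani-style rules (small/medium/large motion).
    Breakpoints and consequent scales are illustrative assumptions."""
    small = triangular(mean_flow_px, -1.0, 0.0, 10.0)
    medium = triangular(mean_flow_px, 5.0, 15.0, 30.0)
    large = triangular(mean_flow_px, 20.0, 40.0, 1e3)
    # Rule consequents: larger apparent motion tolerates a looser threshold.
    consequents = np.array([0.5, 1.0, 2.0]) * base_thresh_px
    weights = np.array([small, medium, large])
    if weights.sum() == 0:
        return base_thresh_px
    # Weighted-average defuzzification.
    return float(np.dot(weights, consequents) / weights.sum())


def ransac_with_fuzzy_threshold(pts_prev, pts_curr, n_iters=500):
    """Toy 2D translation-only RANSAC whose inlier test uses the fuzzy
    threshold; a real visual odometry pipeline would instead fit an
    essential matrix or full 6-DoF pose to the matched features."""
    flow = np.linalg.norm(pts_curr - pts_prev, axis=1)
    thresh = fuzzy_inlier_threshold(flow.mean())
    best_inliers = np.zeros(len(flow), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        i = rng.integers(len(flow))
        model = pts_curr[i] - pts_prev[i]  # candidate translation hypothesis
        residuals = np.linalg.norm(pts_curr - (pts_prev + model), axis=1)
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers, thresh
```

As a usage note, calling `fuzzy_inlier_threshold` with a small mean displacement returns a tighter threshold than the base value, while large inter-frame motion loosens it, which is the flexibility the abstract attributes to the fuzzy scheme.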