
Maximum Consensus Parameter Estimation by Reweighted \(\ell _1\) Methods

  • Pulak Purkait
  • Christopher Zach
  • Anders Eriksson
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10746)

Abstract

Robust parameter estimation in computer vision is frequently accomplished by solving the maximum consensus (MaxCon) problem. Widely used randomized methods for MaxCon, however, produce only approximate solutions that vary from run to run, while global methods are too slow to apply to realistic problem sizes. Here we analyse MaxCon through the lens of iteratively reweighted algorithms operating on the data residuals. We propose a smooth surrogate function whose minimization leads to an extremely simple iteratively reweighted algorithm for MaxCon. We show that our algorithm is very efficient and, in many cases, yields the global solution, which makes it an attractive alternative to randomized methods and global optimizers. We also present a convergence analysis of our method and discuss its fundamental differences from other iteratively reweighted methods.
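To make the abstract concrete, the following is a minimal sketch of a generic iteratively reweighted \(\ell _1\) scheme for maximum consensus under a linear residual model \(r_i = a_i^\top \theta - b_i\) with inlier threshold \(\epsilon \). The hinge-slack surrogate, the weight update \(w_i = 1/(s_i + \delta )\), and the LP formulation are illustrative assumptions, not necessarily the exact surrogate and update rule proposed in the paper; see the full text for those.

```python
# Illustrative iteratively reweighted l1 (IRL1) heuristic for maximum consensus
# with a linear residual model r_i = a_i^T theta - b_i and inlier band |r_i| <= eps.
# The surrogate, weight update and LP below are assumptions for illustration only.
import numpy as np
from scipy.optimize import linprog


def weighted_l1_step(A, b, w, eps):
    """Solve min_theta sum_i w_i * max(|a_i^T theta - b_i| - eps, 0) as an LP.

    Variables x = [theta (d); slack (n)] with slack_i >= 0 and
    -eps - slack_i <= a_i^T theta - b_i <= eps + slack_i.
    """
    n, d = A.shape
    c = np.concatenate([np.zeros(d), w])          # objective: sum_i w_i * slack_i
    A_ub = np.block([[A, -np.eye(n)],             #  A theta - slack <=  b + eps
                     [-A, -np.eye(n)]])           # -A theta - slack <= -b + eps
    b_ub = np.concatenate([b + eps, -b + eps])
    bounds = [(None, None)] * d + [(0, None)] * n # theta free, slack nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d], res.x[d:]


def irl1_maxcon(A, b, eps, n_iters=20, delta=1e-3):
    """Iteratively reweighted l1 heuristic for the MaxCon problem."""
    n = A.shape[0]
    w = np.ones(n)                                # first pass is a plain l1 (hinge) fit
    theta = None
    for _ in range(n_iters):
        theta, slack = weighted_l1_step(A, b, w, eps)
        w = 1.0 / (slack + delta)                 # down-weight points outside the band
    inliers = np.abs(A @ theta - b) <= eps + 1e-9
    return theta, int(inliers.sum())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_true = np.array([1.5, -2.0])
    A = rng.normal(size=(200, 2))
    b = A @ theta_true + rng.normal(scale=0.05, size=200)
    b[:60] += rng.normal(scale=5.0, size=60)      # inject gross outliers
    theta, n_inliers = irl1_maxcon(A, b, eps=0.15)
    print("estimate:", theta, "consensus size:", n_inliers)
```

Each iteration solves one convex weighted problem, and the weights progressively concentrate the fit on points whose residuals already lie (nearly) inside the inlier band, which is the general mechanism the paper exploits.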

Keywords

Reweighted \(\ell _1\) methods · Maximum consensus · M-estimator


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Pulak Purkait (1)
  • Christopher Zach (1)
  • Anders Eriksson (2)
  1. Toshiba Research Europe, Cambridge, UK
  2. Queensland University of Technology, Brisbane, Australia
