Abstract
Local search methods are a very general approach to optimization, in which one or, more typically, several variables to be optimized are allowed to take continuous values. Numerous approaches exist for finding the set of values that optimizes (i.e., minimizes or maximizes) a given objective function. The most straightforward case arises when the objective function is a quadratic form: taking its first derivative and setting it to zero leads to a linear system of equations, which can be solved in one step. This procedure is employed, for example, in linear least squares regression, where observed data are approximated by a function that is linear in the design variables. More general objective functions can be tackled by iterative schemes that repeat two steps until convergence: first a promising search direction is specified, and then a one-dimensional optimization is performed along this direction. These methods can be categorized, according to how much derivative information about the objective function they use, into zero-order, first-order, and second-order methods. Schemes for both steps of this general procedure are treated in this chapter.
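The following minimal sketch (not from the chapter; the test problem and function names are illustrative assumptions) contrasts the two settings described above: a quadratic least-squares objective solved in one step via the normal equations, and a general smooth objective minimized iteratively by choosing a search direction and then a step length along it.

```python
import numpy as np

# --- One-step solution of linear least squares: min_x ||A x - b||^2 ---
# Setting the gradient 2 A^T (A x - b) to zero yields the linear system
# (A^T A) x = A^T b, which is solved in a single step.
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # design matrix
b = np.array([1.0, 2.0, 2.0])                       # observed data
x_ls = np.linalg.solve(A.T @ A, A.T @ b)

# --- Iterative first-order scheme for a general smooth objective ---
# Step 1: search direction (here the negative gradient).
# Step 2: one-dimensional optimization along that direction
#         (here approximated by a backtracking/Armijo line search).
def f(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])

x = np.zeros(2)
for _ in range(100):
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-8:   # gradient small: converged
        break
    d = -g                          # first-order search direction
    t = 1.0
    # Shrink the step until sufficient decrease along d is achieved
    while f(x + t * d) > f(x) + 1e-4 * t * np.dot(g, d):
        t *= 0.5
    x = x + t * d

print("least-squares solution:", x_ls)
print("iterative minimizer:", x)    # approaches the minimum at (1, -2)
```

Swapping the negative gradient for a finite-difference estimate or a Newton direction would turn this same loop into a zero-order or second-order method, respectively, which is exactly the categorization the abstract refers to.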