Continuous Optimization

Optimization for Computer Vision

Part of the book series: Advances in Computer Vision and Pattern Recognition (ACVPR)


Abstract

A very general approach to optimization is local search, where one or, more typically, several variables to be optimized are allowed to take continuous values. Numerous methods exist for finding the set of values that optimizes (i.e., minimizes or maximizes) a given objective function. The most straightforward case arises when the objective function is a quadratic form: taking its first derivative and setting it to zero leads to a linear system of equations, which can be solved in a single step. This procedure is employed, for example, in linear least squares regression, where observed data are approximated by a function that is linear in the design variables. More general functions can be tackled by iterative schemes in which two steps, first specifying a promising search direction and then performing a one-dimensional optimization along this direction, are repeated until convergence. According to how much information about the derivatives of the objective function they use, these methods are categorized as zero-order, first-order, or second-order methods. Schemes for both steps of this general procedure are treated in this chapter.
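
As an illustration of the two approaches sketched above, the following minimal Python snippet (a sketch assuming NumPy; the problem data and variable names are illustrative, not taken from the chapter) solves a linear least squares problem both in one step via the normal equations and iteratively by steepest descent with an exact one-dimensional line search:

import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((50, 3))        # design matrix: 50 observations, 3 design variables
y = J @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)

# One-step solution: the objective ||J x - y||^2 is a quadratic form, so
# setting its gradient to zero yields the linear normal equations
#   (J^T J) x = J^T y.
A = J.T @ J
b = J.T @ y
x_direct = np.linalg.solve(A, b)

# Iterative scheme: repeat (i) choose a search direction (here the negative
# gradient, a first-order choice) and (ii) optimize along it. For a quadratic
# objective the one-dimensional line search has the closed form
#   alpha = (r^T r) / (r^T A r),  with r = b - A x.
x = np.zeros(3)
for _ in range(200):
    r = b - A @ x                       # negative gradient (residual)
    if np.linalg.norm(r) < 1e-10:       # converged
        break
    alpha = (r @ r) / (r @ (A @ r))     # exact line search along r
    x += alpha * r

print(np.allclose(x, x_direct))         # both reach the same minimizer

For non-quadratic objectives the closed-form step length is replaced by a numerical one-dimensional search, and the search direction may instead be chosen by zero-order heuristics or from second-order (Newton-type) derivative information.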


Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Treiber, M.A. (2013). Continuous Optimization. In: Optimization for Computer Vision. Advances in Computer Vision and Pattern Recognition. Springer, London. https://doi.org/10.1007/978-1-4471-5283-5_2

  • DOI: https://doi.org/10.1007/978-1-4471-5283-5_2

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-5282-8

  • Online ISBN: 978-1-4471-5283-5

  • eBook Packages: Computer Science, Computer Science (R0)
