Calibration by optimization without using derivatives

Abstract

Applications in engineering frequently require the adjustment of certain parameters. While the mathematical laws that determine these parameters are often well understood, due to time limitations in everyday industrial life it is typically not feasible to derive an explicit computational procedure for adjusting the parameters from given measurement data. This paper aims to show that in such situations, direct optimization offers a very simple approach that can be of great help. More precisely, we present a numerical implementation for the local minimization of a smooth function \(f:{\mathbb R}^n\rightarrow {\mathbb R}\) subject to upper and lower bounds, without relying on knowledge of the derivative of f. In contrast to other direct optimization approaches, the algorithm assumes that the function evaluations are fairly cheap and that the rounding errors associated with the function evaluations are small. As an illustration, the algorithm is applied to approximate the solution of a calibration problem arising from an engineering application. The algorithm uses a Quasi-Newton trust region approach, adjusting the trust region radius with a line search. The line search is based on a spline function that minimizes a weighted least squares sum of the jumps in its third derivative. The approximate gradients used in the Quasi-Newton approach are computed by central finite differences. A new randomized basis approach is considered to generate finite difference approximations of the gradient which also allow for a curvature correction of the Hessian in addition to the Quasi-Newton update. These concepts are combined with an active set strategy. The implementation is in the public domain; numerical experiments indicate that the algorithm is well suited to the calibration problem of measuring instruments that prompted this research. Further preliminary numerical results suggest that an approximate local minimizer of a smooth non-convex function f depending on \(n\le 300\) variables can be computed with a number of iterations that grows moderately with n.
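The central finite-difference gradients mentioned in the abstract admit a compact illustration. The sketch below is not the authors' Matlab implementation (see Jarre 2015 in the references); it is a minimal Python example, and the function name central_diff_gradient and the step size h are chosen here for exposition only.

```python
import numpy as np

def central_diff_gradient(f, x, h=1e-5):
    """Approximate the gradient of f at x by central finite differences.

    Each component uses (f(x + h*e_i) - f(x - h*e_i)) / (2h), which has
    O(h^2) truncation error; this is adequate when, as assumed in the
    paper, function evaluations are cheap and their rounding errors
    are small.
    """
    n = x.size
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h                                 # perturb coordinate i only
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Example: the gradient of f(x) = x'x at x = (1, 1, 1) is (2, 2, 2).
g = central_diff_gradient(lambda x: x @ x, np.ones(3))
```

Note that each gradient estimate costs 2n function evaluations, which is why the assumption of cheap evaluations matters.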

Notes

  1. In short-distance photogrammetry this approach has been used successfully for a long time. Here, the calibration parameters describe the properties of the camera, which change whenever the lens is changed. With the aid of a large number of measured values, the parameters of the exterior and interior orientation of the camera are approximated on high-performance computers in order to determine the measured variables. Another application is the compensation of the geometric deviations of machine tools and coordinate measuring equipment.

  2. In Powell (1970) this update is defined as the limit of iterating the Broyden rank-1 update followed by a symmetrization; Powell (1970) also includes numerical examples and convergence properties. In addition, this update minimizes the Frobenius norm of the correction subject to the Quasi-Newton condition and the symmetry condition, see e.g. Jarre and Stoer (2004), Theorems 6.6.10 and 6.6.18; a sketch of the resulting update formula follows this list. The minimum norm property motivates the choice of this update for the Euclidean norm trust region problem.
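The update characterized in note 2 is commonly known as the Powell symmetric Broyden (PSB) update, and its explicit formula can be written down directly. The following Python sketch implements the standard PSB formula for illustration; the names psb_update, B, s, y are ours, and this is not code from the authors' MWD collection.

```python
import numpy as np

def psb_update(B, s, y):
    """Powell symmetric Broyden (PSB) update.

    Among all symmetric matrices satisfying the Quasi-Newton (secant)
    condition  B_new @ s == y, the result is the one closest to the
    symmetric matrix B in the Frobenius norm.
    """
    r = y - B @ s                     # residual of the secant condition
    ss = s @ s                        # squared norm of the step s
    return (B + (np.outer(r, s) + np.outer(s, r)) / ss
              - (r @ s) / ss ** 2 * np.outer(s, s))
```

One checks directly that psb_update(B, s, y) @ s reproduces y, and that the result is symmetric whenever B is.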

References

  • Anderson EJ, Ferris MC (2001) A direct search algorithm for optimization of expensive functions by surrogates. SIAM J Optim 11:837–857

  • Audet C (2014) A survey on direct search methods for Blackbox optimization and their applications. In: Pardalos PM, Rassias TM (eds) Mathematics Without Boundaries. Springer, New York, pp 31–56

  • Audet C, Dennis JE Jr (2006) Mesh adaptive direct search algorithms for constrained optimization. SIAM J Optim 17(1):188–217

  • Audet C, Ianni A, Le Digabel S, Tribes C (2014) Reducing the number of function evaluations in mesh adaptive direct search algorithms. SIAM J Optim 24(2):621–642

  • Bingham D (2005) Virtual Library of Simulation Experiments: Test Functions and Datasets, Optimization Test Problems. http://www.sfu.ca/~ssurjano/optimization.html

  • BIPM - JCGM 200:2012 (2012) International Vocabulary of Metrology—Basic and General Concepts and Associated Terms (VIM), vol. 28, 3rd edn. Springer, Berlin, Heidelberg

  • Booker AJ, Dennis JE Jr, Frank PD, Serafini DB, Torczon V, Trosset MW (1999) A rigorous framework for optimization of expensive functions by surrogates. Struct Optim 17:1–13

  • Botsaris CA, Jacobson DH (1976) A Newton-type curvilinear search method for optimization. J Math Anal Appl 54(1):217–229

  • Conn AR, Scheinberg K, Toint PL (1997a) On the convergence of derivative-free methods for unconstrained optimization. In: Powell MJD, Buhmann MD, Iserles A (eds) Approximation Theory and Optimization. Cambridge University Press, Cambridge, pp 83–108

  • Conn AR, Scheinberg K, Toint PL (1997b) Recent progress in unconstrained nonlinear optimization without derivatives. Math Program 79:397–414

  • Conn AR, Scheinberg K, Vicente LN (2009a) Global convergence of general derivative-free trust-region algorithms to first- and second-order critical points. SIAM J Optim 20:387–415

  • Conn AR, Scheinberg K, Vicente LN (2009b) Introduction to Derivative-Free Optimization, MPS-SIAM Series on Optimization. SIAM, Philadelphia

  • Csendes T (1988) Nonlinear parameter estimation by global optimization—efficiency and reliability. Acta Cybern 8:361–370

  • Csendes T, Pal L, Sendin JOH, Banga JR (2008) The GLOBAL Optimization Method Revisited, Report. Institute of Informatics, University of Szeged, Hungary

  • Custodio AL, Vicente LN (2007) Using sampling and simplex derivatives in pattern search methods. SIAM J Optim 18:537–555

  • Custodio AL, Rocha H, Vicente LN (2010) Incorporating minimum Frobenius norm models in direct search. Comput Optim Appl 46:265–278

  • Dennis JE Jr, Echebest N, Guardarucci MT, Martinez JM, Scolnik HD, Vacchino C (1991) A curvilinear search using tridiagonal secant updates for unconstrained optimization. SIAM J Optim 1(3):333–357

  • Elster C, Neumaier A (1995) A grid algorithm for bound constrained optimization of noisy functions. IMA J Numer Anal 15:585–608

  • Goldstein H, Poole C, Safko J (2002) Classical Mechanics, 3rd edn. Addison-Wesley, San Francisco, pp 150–154

  • Golub GH, Van Loan CF (1993) Matrix Computations, 2nd edn. The Johns Hopkins University Press, Baltimore/London

  • Hansen N (2006) The CMA Evolution Strategy: A Comparing Review. In: Lozano JA, Larraga P, Inza I, Bengoetxea E (eds) Towards a New Evolutionary Computation. Advances in Estimation of Distribution Algorithms. Springer, Heidelberg, pp 75–102

  • Hansen N, Niederberger ASP, Guzzella L, Koumoutsakos P (2009) A method for handling uncertainty in evolutionary optimization with an application to feedback control of combustion. IEEE Trans Evol Comput 13(1):180–197

  • Huyer W, Neumaier A (2008) Snobfit—stable noisy optimization by branch and fit. ACM Trans Math Softw 35:25 (Article 9)

  • Jarre F (2015) MWD, Smooth Minimization Without using Derivatives, a Matlab Collection. http://www.opt.uni-duesseldorf.de/en/forschung-fs.html

  • Jarre F, Stoer J (2004) Optimierung. Springer, Berlin/Heidelberg/New York

  • Kiefer J (1953) Sequential minimax search for a maximum. Proc Am Math Soc 4(3):502–506

  • Lewis RM, Torczon V, Trosset MW (2000) Direct search methods: then and now. J Comput Appl Math 124(1–2):191–207

  • Li RC (2008) On Meinardus’ examples for the conjugate gradient method. Math Comput 77(261):335–352

  • Powell MJD (1970) A new algorithm for unconstrained optimization. In: Rosen JB, Mangasarian OL, Ritter K (eds) Nonlinear Programming. Academic Press, New York, pp 31–65

  • Powell MJD (1998) Direct search algorithms for optimization calculations. Acta Numer 7:287–336

  • Rios LM, Sahinidis NV (2013) Derivative-free optimization: a review of algorithms and comparison of software implementations. J Glob Optim 56:1247–1293

  • Schmidt M (2012) minFunc: Unconstrained Differentiable Multivariate Optimization in Matlab. http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html

  • Stoer J, Bulirsch R (2002) Introduction to Numerical Analysis, 3rd edn. Texts in Applied Mathematics. Springer, Berlin

  • Vaz AIF, Vicente LN (2007) A particle swarm pattern search method for bound constrained global optimization. J Glob Optim 39:197–219

Acknowledgments

The authors would like to thank Andrew Conn, Roland Freund, and Arnold Neumaier for helpful criticism, and an anonymous referee for comments that helped to improve this paper.

Author information

Correspondence to Markus Lazar.

Additional information

Markus Lazar and Florian Jarre received financial support from i-for-T GmbH, Germany.

Cite this article

Lazar, M., Jarre, F. Calibration by optimization without using derivatives. Optim Eng 17, 833–860 (2016). https://doi.org/10.1007/s11081-016-9324-3
