
Optimization and Engineering, Volume 17, Issue 4, pp 833–860

Calibration by optimization without using derivatives

  • Markus Lazar
  • Florian Jarre

Abstract

Applications in engineering frequently require the adjustment of certain parameters. While the mathematical laws that determine these parameters are often well understood, time limitations in everyday industrial life typically make it infeasible to derive an explicit computational procedure for adjusting the parameters based on given measurement data. This paper aims to show that in such situations direct optimization offers a very simple approach that can be of great help. More precisely, we present a numerical implementation for the local minimization of a smooth function \(f:{\mathbb R}^n\rightarrow {\mathbb R}\) subject to upper and lower bounds, without relying on knowledge of the derivative of f. In contrast to other direct optimization approaches, the algorithm assumes that the function evaluations are fairly cheap and that the rounding errors associated with them are small. As an illustration, the algorithm is applied to approximate the solution of a calibration problem arising from an engineering application. The algorithm uses a quasi-Newton trust region approach, adjusting the trust region radius with a line search. The line search is based on a spline function that minimizes a weighted least squares sum of the jumps in its third derivative. The approximate gradients used in the quasi-Newton approach are computed by central finite differences. A new randomized basis approach is considered to generate finite difference approximations of the gradient that also allow for a curvature correction of the Hessian in addition to the quasi-Newton update. These concepts are combined with an active set strategy. The implementation is in the public domain; numerical experiments indicate that the algorithm is well suited for the calibration problem of measuring instruments that prompted this research. Further preliminary numerical results suggest that an approximate local minimizer of a smooth non-convex function f depending on \(n\le 300\) variables can be computed with a number of iterations that grows moderately with n.
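To make the main ingredients concrete, the following Python sketch combines two of them: central finite-difference gradient approximations and a quasi-Newton (SR1) trust-region iteration restricted to box constraints. This is a minimal generic illustration under stated assumptions, not the authors' public-domain Matlab implementation; the paper's spline-based line search and randomized-basis gradients are not reproduced here, and all function and variable names below are invented for this example.

```python
import numpy as np

def central_diff_grad(f, x, h=1e-6):
    """Approximate the gradient of f at x by central finite differences."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def tr_qn_minimize(f, x0, lb, ub, max_iter=200, tol=1e-8):
    """Derivative-free local minimization of a smooth f subject to
    bounds lb <= x <= ub, using finite-difference gradients, an SR1
    quasi-Newton Hessian approximation, and a simple trust region."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    n = len(x)
    B = np.eye(n)          # quasi-Newton Hessian approximation
    radius = 1.0           # trust-region radius
    fx = f(x)
    g = central_diff_grad(f, x)
    for _ in range(max_iter):
        # Projected-gradient stationarity test for the bound constraints
        if np.linalg.norm(np.clip(x - g, lb, ub) - x) < tol:
            break
        # Newton-like step, truncated to the trust region and the box
        try:
            p = -np.linalg.solve(B, g)
        except np.linalg.LinAlgError:
            p = -g
        if np.linalg.norm(p) > radius:
            p *= radius / np.linalg.norm(p)
        x_new = np.clip(x + p, lb, ub)
        s = x_new - x
        pred = -(g @ s + 0.5 * s @ B @ s)   # model's predicted decrease
        f_new = f(x_new)
        rho = (fx - f_new) / pred if pred > 0 else -1.0
        if rho > 0.75:
            radius *= 2.0    # good agreement: enlarge the trust region
        elif rho < 0.25:
            radius *= 0.5    # poor agreement: shrink it
        if rho > 1e-4:       # accept the step
            g_new = central_diff_grad(f, x_new)
            # SR1 update of B, skipped when numerically unstable
            v = (g_new - g) - B @ s
            denom = v @ s
            if abs(denom) > 1e-8 * np.linalg.norm(v) * np.linalg.norm(s):
                B += np.outer(v, v) / denom
            x, fx, g = x_new, f_new, g_new
    return x, fx

# Example: minimize the Rosenbrock function over a box
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
x_opt, f_opt = tr_qn_minimize(rosen, x0=[-1.2, 1.0],
                              lb=np.array([-2.0, -2.0]),
                              ub=np.array([2.0, 2.0]))
```

Note that each gradient approximation costs 2n function evaluations, which is consistent with the paper's assumption that evaluations of f are fairly cheap and only mildly affected by rounding errors.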

Keywords

Calibration of measuring instruments · Minimization without derivatives · Direct search · Quadratic model

Acknowledgments

The authors would like to thank Andrew Conn, Roland Freund, and Arnold Neumaier for helpful criticism and an unknown referee for comments that helped to improve this paper.


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Fakultät für Ingenieurwissenschaften, University of Applied Sciences, Rosenheim, Germany
  2. Mathematisches Institut, University of Düsseldorf, Düsseldorf, Germany
