Optimization

Part of the book series: Simulation Foundations, Methods and Applications ((SFMA))

Abstract

The goal of a simulation project is often formulated in terms of an optimization task, and this chapter explores this topic within the CTDS context. A key facet of this task is the identification of a criterion function that measures some aspect of the SUI’s behaviour that is related to the project goal(s). The criterion function is dependent on some set of parameters embedded within the SUI. The optimization task corresponds to finding a ‘best value’ for this set of parameters as indicated by an extreme value (either maximum or minimum) for the selected criterion function. This problem of extremizing the value of a criterion function by locating a best value for a set of parameters has been widely studied in the optimization literature. In the modelling and simulation context, the problem is distinctive inasmuch as the evaluation of the criterion function is linked to a simulation model. Several well-established numerical procedures that can be directly applied when the simulation model falls in the CTDS category are outlined in this chapter. Included here are both a gradient-independent method (the Nelder–Mead Simplex method) and a gradient-dependent method (the conjugate gradient method). Associated issues of gradient evaluation and the linear search problem are discussed.
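The gradient-independent approach mentioned above treats the criterion function J as a black box, which is exactly the situation when each evaluation of J requires a simulation run. The following is a minimal sketch of the Nelder–Mead simplex procedure (reflection, expansion, contraction, shrink); the quadratic J used here is purely a hypothetical stand-in for a criterion function evaluated via a CTDS simulation model, and is not taken from the chapter.

```python
def J(p):
    """Hypothetical criterion function (stand-in for a simulation-based evaluation)."""
    x, y = p
    return (x - 1.0) ** 2 + (y + 2.0) ** 2


def nelder_mead(f, start, step=0.5, tol=1e-8, max_iter=500):
    """Classic simplex search: reflection, expansion, contraction, shrink."""
    n = len(start)
    # Initial simplex: the start point plus one offset vertex per coordinate axis.
    simplex = [list(start)]
    for i in range(n):
        v = list(start)
        v[i] += step
        simplex.append(v)

    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        # Centroid of all vertices except the worst.
        centroid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        reflect = [centroid[i] + (centroid[i] - worst[i]) for i in range(n)]
        if f(reflect) < f(best):
            # Reflection improved on the best vertex: try expanding further.
            expand = [centroid[i] + 2.0 * (centroid[i] - worst[i]) for i in range(n)]
            simplex[-1] = expand if f(expand) < f(reflect) else reflect
        elif f(reflect) < f(simplex[-2]):
            simplex[-1] = reflect
        else:
            # Reflection failed: contract toward the centroid.
            contract = [centroid[i] + 0.5 * (worst[i] - centroid[i]) for i in range(n)]
            if f(contract) < f(worst):
                simplex[-1] = contract
            else:
                # Last resort: shrink every vertex toward the best one.
                simplex = [[(v[i] + best[i]) / 2.0 for i in range(n)] for v in simplex]

    simplex.sort(key=f)
    return simplex[0]


p_best = nelder_mead(J, [0.0, 0.0])
print(p_best)  # close to the minimizer [1, -2]
```

Because the procedure uses only comparisons of J values, it applies unchanged when J is computed by running the simulation model, at the cost of one run per function evaluation.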


Notes

  1. Figure 10.1 has been taken from Pintér [23] with the permission of the author.

  2. Within the present context, this implies that while α is in Î, J(α) always increases as α moves to the right from α*, and likewise, J(α) always increases as α moves to the left from α*.
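This unimodality property is exactly what interval-reduction methods for the linear search problem exploit: since J(α) increases on either side of α*, the bracketing interval can be shrunk by a fixed ratio at each step. A minimal golden-section sketch, with an illustrative quadratic J(α) standing in for the criterion function along the search direction (the interval and function are assumptions, not from the chapter):

```python
import math


def J(alpha):
    """Illustrative unimodal function of the step length alpha."""
    return (alpha - 2.0) ** 2


def golden_section(f, a, b, tol=1e-6):
    """Locate the minimizer of a unimodal f on the interval [a, b]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0  # about 0.618
    # Two interior probe points placed at the golden ratio.
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]; reuse c as the new right probe.
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            # Minimum lies in [c, b]; reuse d as the new left probe.
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0


alpha_star = golden_section(J, 0.0, 5.0)
print(alpha_star)  # close to 2.0
```

Each iteration requires only one new evaluation of J (one probe point is reused), which matters when every evaluation implies a simulation run.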

References

  1. Al-Baali M (1985) Descent property and global convergence of the Fletcher–Reeves method with inexact line search. IMA J Numer Anal 5:121–124

  2. Beale EML (1972) A derivation of conjugate gradients. In: Lootsma FA (ed) Numerical methods for non-linear optimization. Academic Press, London, pp 39–43

  3. Bertsekas DP (1996) Constrained optimization and Lagrange multiplier methods. Athena Scientific, Nashua

  4. Bhatnagar S, Kowshik HJ (2005) A discrete parameter stochastic approximation algorithm for simulation optimization. Simulation 81(11):757–772

  5. Bonnans JF, Gilbert JC, Lemaréchal C, Sagastizábal CA (2003) Numerical optimization: theoretical and practical aspects. Springer, Berlin

  6. Buchholz P (2009) Optimization of stochastic discrete event models and algorithms for optimization logistics. In: Dagstuhl seminar proceedings 09261, Dagstuhl, Germany

  7. Cormen TH, Leiserson CE, Rivest RL (1990) Introduction to algorithms. MIT Press, Cambridge, MA

  8. Deroussi L, Gourgand M, Tchernev N (2006) In: 2006 international conference on service systems and service management, October, Troyes, France, pp 495–500

  9. Fletcher R (1987) Practical methods of optimization, 2nd edn. Wiley, New York

  10. Fletcher R, Reeves CM (1964) Function minimization by conjugate gradients. Comput J 7:149–154

  11. Fu MC (2002) Optimization for simulation: theory versus practice. INFORMS J Comput 14:192–215

  12. Gilbert J, Nocedal J (1992) Global convergence properties of conjugate gradient methods for optimization. SIAM J Optim 2:21–42

  13. Heath MT (2000) Scientific computing: an introductory survey, 2nd edn. McGraw-Hill, New York

  14. Lagarias JC, Reeds JA, Wright MH, Wright PE (1998) Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM J Optim 9:112–147

  15. Law AM, Kelton WD (2000) Simulation modeling and analysis, 3rd edn. McGraw-Hill, New York

  16. Lewis FL, Syrmos VL (1995) Optimal control, 2nd edn. Wiley, New York

  17. Nelder JA, Mead R (1965) A simplex method for function minimization. Comput J 7:308–313

  18. Nocedal J, Wright SJ (1999) Numerical optimization. Springer, New York

  19. Ólafsson S, Kim J (2002) Simulation optimization. In: Proceedings of the 2002 winter simulation conference, San Diego, CA, pp 79–84

  20. Ortega JM, Rheinboldt WC (1970) Iterative solution of nonlinear equations in several variables. Academic Press, New York

  21. Pedregal P (2004) Introduction to optimization. Springer, New York

  22. Pichitlamken J, Nelson BL (2003) A combined procedure for optimizing via simulation. ACM Trans Model Comput Simul 13:155–179

  23. Pintér JD (2013) LGO – a model development and solver system for global-local nonlinear optimization, User's guide, 2nd edn. Pintér Consulting Services, Inc., Halifax. www.pinterconsulting.com (First edition: June 1995)

  24. Polak E, Ribière G (1969) Note on the convergence of conjugate-direction methods [Note sur la convergence de méthodes de directions conjuguées]. Revue Française d'Informatique et de Recherche Opérationnelle 16:35–43

  25. Powell MJD (1977) Restart procedures for the conjugate gradient method. Math Program 12:241–254

  26. Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1999) Numerical recipes in C: the art of scientific computing, 2nd edn. Cambridge University Press, Cambridge

  27. Rubinstein RY, Shapiro A (1993) Discrete event systems: sensitivity analysis and stochastic optimization by the score function method. Wiley, New York

  28. Rykov A (1983) Simplex algorithms for unconstrained optimization. Probl Control Inf Theory 12:195–208

  29. Seierstad A, Sydsæter K (1987) Optimal control theory with economic applications. North-Holland, Amsterdam

  30. Sorenson HW (1969) Comparison of some conjugate direction procedures for function minimization. J Franklin Inst 288:421–441

  31. Wolfe P (1969) Convergence conditions for ascent methods. SIAM Rev 11:226–235

  32. Zabinsky ZB (2003) Stochastic adaptive search for global optimization. Kluwer Academic Publishers, Dordrecht

Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Birta, L.G., Arbez, G. (2013). Optimization. In: Modelling and Simulation. Simulation Foundations, Methods and Applications. Springer, London. https://doi.org/10.1007/978-1-4471-2783-3_10

  • Print ISBN: 978-1-4471-2782-6

  • Online ISBN: 978-1-4471-2783-3