Towards Gaussian Process-based Optimization with Finite Time Horizon

  • David Ginsbourger
  • Rodolphe Le Riche
Conference paper
Part of the Contributions to Statistics book series (CONTRIB.STAT.)

Abstract

During the last decade, Kriging-based sequential optimization algorithms have become standard methods in computer experiments. These algorithms rely on the iterative maximization of sampling criteria such as the Expected Improvement (EI), which takes advantage of Kriging conditional distributions to make an explicit trade-off between promising and uncertain points in the search space. We have recently worked on a multipoint EI criterion meant to choose simultaneously several points for synchronous parallel computation. The results presented in this article concern sequential procedures with a fixed number of iterations. We show that maximizing the usual EI at each iteration is suboptimal. In essence, the latter amounts to considering the current iteration as the last one. This work formulates the problem of optimal strategy for finite horizon sequential optimization, provides the solution to this problem in terms of a new multipoint EI, and illustrates the suboptimality of maximizing the 1-point EI at each iteration on the basis of a first counter-example.
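To make the greedy baseline concrete, the following is a minimal sketch (not the authors' code) of the sequential strategy that the article argues is suboptimal under a finite horizon: at each remaining iteration, the closed-form 1-point EI of Jones et al. (1998) is maximized as if the current evaluation were the last one. The toy objective, the Gaussian covariance kernel and its length-scale, the simple-kriging posterior, and the grid-based maximization are all illustrative assumptions.

```python
# Greedy 1-point EI loop on a toy 1-D problem (illustrative sketch only).
import numpy as np
from scipy.stats import norm

def kernel(a, b, length=0.15):
    """Gaussian (squared-exponential) covariance between 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def posterior(X, y, Xs):
    """Simple-kriging posterior mean/std (zero prior mean, unit variance)."""
    K = kernel(X, X) + 1e-10 * np.eye(len(X))   # jitter for stability
    Ks = kernel(X, Xs)
    w = np.linalg.solve(K, Ks)                  # kriging weights
    mu = w.T @ y
    var = np.clip(1.0 - np.sum(Ks * w, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, fmin):
    """Closed-form 1-point EI for minimization (Jones et al., 1998)."""
    z = (fmin - mu) / sd
    return (fmin - mu) * norm.cdf(z) + sd * norm.pdf(z)

f = lambda x: np.sin(12 * x) + x      # toy objective (assumption)
X = np.array([0.1, 0.5, 0.9])         # initial design
y = f(X)
grid = np.linspace(0.0, 1.0, 501)

horizon = 3                           # fixed number of remaining evaluations
for step in range(horizon):
    mu, sd = posterior(X, y, grid)
    ei = expected_improvement(mu, sd, y.min())
    xnew = grid[np.argmax(ei)]        # greedy: treats this step as the last
    X, y = np.append(X, xnew), np.append(y, f(xnew))
    print(f"step {step + 1}: x = {xnew:.3f}, best = {y.min():.3f}")
```

In the finite-horizon view developed in the paper, the decision at each step should instead account for the evaluations still to come, which is what the proposed multipoint EI formulation captures; the greedy loop above is exactly the myopic baseline it is compared against.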

Keywords

Sequential strategy, finite horizon, Gaussian process model, approximate dynamic programming, finite time horizon



Acknowledgements

This work was funded by the Optimisation Multi-Disciplinaire (OMD) project of the French Research Agency (ANR). The authors would like to thank Julien Bect (Ecole Supérieure d’Electricité) for providing them with the related results of Mockus (1988).

References

  1. Auger, A. and Teytaud, O. (2010). Continuous lunches are free plus the design of optimal optimization algorithms. Algorithmica 57, 121–146.
  2. Bertsekas, D. (2007). Dynamic Programming and Optimal Control, Vol. 1. Belmont, MA: Athena Scientific.
  3. Ginsbourger, D., Le Riche, R., and Carraro, L. (2010). Kriging is well-suited to parallelize optimization. In Computational Intelligence in Expensive Optimization Problems, Studies in Evolutionary Learning and Optimization. Springer-Verlag.
  4. Jones, D., Schonlau, M., and Welch, W. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13, 455–492.
  5. Mockus, J. (1988). Bayesian Approach to Global Optimization. Amsterdam: Kluwer.
  6. Powell, W. (2007). Approximate Dynamic Programming: Solving the Curses of Dimensionality. New York: Wiley.
  7. Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press.
  8. Schonlau, M. (1997). Computer Experiments and Global Optimization. Ph.D. thesis, University of Waterloo, Canada.
  9. Stein, M. (1999). Interpolation of Spatial Data: Some Theory for Kriging. New York: Springer.

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  1. CHYN, Neuchâtel, Switzerland
  2. CROCUS, Ecole des Mines, Saint-Etienne, France
