Abstract
During the last decade, Kriging-based sequential optimization algorithms have become standard methods in computer experiments. These algorithms rely on the iterative maximization of a sampling criterion such as the Expected Improvement (EI), which exploits the Kriging conditional distributions to make an explicit trade-off between promising and uncertain points in the search space. We have recently worked on a multipoint EI criterion meant to simultaneously select several points for synchronous parallel computation. The results presented in this article concern sequential procedures with a fixed number of iterations. We show that maximizing the usual EI at each iteration is suboptimal: in essence, it amounts to treating the current iteration as if it were the last one. This work formulates the problem of finding an optimal strategy for finite-horizon sequential optimization, provides the solution to this problem in terms of a new multipoint EI, and illustrates the suboptimality of maximizing the 1-point EI at each iteration by means of a first counterexample.
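For readers unfamiliar with the criterion discussed above, the following is a minimal sketch, not the authors' code, of the closed-form 1-point EI under a Gaussian Kriging predictive distribution, as derived in Jones, Schonlau and Welch (1998); the function name and the numerical guard are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_min):
    """Closed-form 1-point Expected Improvement (Jones et al., 1998).

    mu, sigma : Kriging posterior mean and standard deviation at the
                candidate point(s), given the current observations.
    y_min     : best (smallest) objective value observed so far.
    """
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (y_min - mu) / sigma
    # EI rewards both a low predicted mean (exploitation) and a high
    # predictive uncertainty (exploration), which is the trade-off the
    # abstract refers to.
    return (y_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: EI grows where the mean is low and/or the uncertainty is high.
mu = np.array([0.0, 0.5, 1.0])
sigma = np.array([0.1, 0.5, 1.0])
print(expected_improvement(mu, sigma, y_min=0.2))
```

A myopic strategy maximizes this quantity at every iteration; the paper's point is that, over a fixed number of remaining iterations, this is in general suboptimal.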
References
Auger, A. and O. Teytaud (2010). Continuous lunches are free plus the design of optimal optimization algorithms. Algorithmica 57, 121–146.
Bertsekas, D. (2007). Dynamic Programming and Optimal Control, Vol. 1. Belmont, MA: Athena Scientific.
Ginsbourger, D., R. Le Riche, and L. Carraro (2010). Kriging is well-suited to parallelize optimization. In Computational Intelligence in Expensive Optimization Problems, Studies in Evolutionary Learning and Optimization. Springer-Verlag.
Jones, D., M. Schonlau, and W. Welch (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13, 455–492.
Mockus, J. (1988). Bayesian Approach to Global Optimization. Amsterdam: Kluwer.
Powell, W. (2007). Approximate Dynamic Programming: Solving the Curses of Dimensionality. New York: Wiley.
Rasmussen, C. and K. Williams (2006). Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press.
Schonlau, M. (1997). Computer Experiments and Global Optimization. Ph.D. thesis, University of Waterloo, Canada.
Stein, M. (1999). Interpolation of Spatial Data: Some Theory for Kriging. New York: Springer.
Acknowledgements
This work was funded by the Optimisation Multi-Disciplinaire (OMD) project of the French Research Agency (ANR). The authors would like to thank Julien Bect (Ecole Supérieure d’Electricité) for providing them with the related results of Mockus (1988).
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Ginsbourger, D., Le Riche, R. (2010). Towards Gaussian Process-based Optimization with Finite Time Horizon. In: Giovagnoli, A., Atkinson, A., Torsney, B., May, C. (eds) mODa 9 – Advances in Model-Oriented Design and Analysis. Contributions to Statistics. Physica-Verlag HD. https://doi.org/10.1007/978-3-7908-2410-0_12
DOI: https://doi.org/10.1007/978-3-7908-2410-0_12
Publisher Name: Physica-Verlag HD
Print ISBN: 978-3-7908-2409-4
Online ISBN: 978-3-7908-2410-0