Optimization of Expensive Functions by Surrogates Created from Neural Network Ensembles
The goal of this paper is to model hypothesis testing. A “real situation” is given in the form of a response surface, which is defined by a derivative-free, continuous, expensive objective function. An ideal hypothesis should correspond to a global minimum of this function; thus, hypothesis testing is converted into optimization of a response surface. First, the objective function is evaluated at a few points. Then, a hypothetical (surrogate) surface landscape is created from an ensemble of approximations of the objective function. The approximations are produced by neural networks, which use the already evaluated samples as their training set. The hypothesis landscape, adapted by a merit function, estimates the possibility of obtaining, at a given point, a better value than the best value found among the already evaluated points. The most promising point (a minimum of the adapted function) is used as the next sample point for the true expensive objective function. Its value is then used to adapt the neural networks, creating a new hypothesis landscape. The results suggest that (1) in order to find a global minimum, it may be useful to have an estimate of the whole response surface, and therefore also to explore points where maxima are predicted, and (2) an assembly of modules predicting the next sample point from the same set of sample points can be more advantageous than a single neural network predictor.
Keywords: Neural Network · Response Surface · Merit Function · Search Point · Neural Network Ensemble
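The loop described in the abstract — fit an ensemble of networks to the evaluated samples, adapt the predicted landscape with a merit function, evaluate the true function at the most promising point, and repeat — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy objective, the extreme-learning-machine-style networks (random fixed hidden layer, output weights solved by least squares), and the mean-minus-spread merit function are all assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_objective(x):
    # Toy stand-in for the expensive response surface (1-D, multimodal).
    return np.sin(3.0 * x) + 0.5 * x**2

def fit_network(X, y, hidden=20):
    # Minimal one-hidden-layer network: hidden weights are drawn at random
    # and kept fixed; output weights are solved by linear least squares.
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda Xq: np.tanh(Xq @ W + b) @ beta

def surrogate_optimize(f, bounds, n_init=5, n_iter=15, n_nets=10, kappa=2.0):
    lo, hi = bounds
    # Evaluate the true objective at a few initial points.
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])
    for _ in range(n_iter):
        # Ensemble of approximations trained on the evaluated samples.
        nets = [fit_network(X, y) for _ in range(n_nets)]
        cand = np.linspace(lo, hi, 201).reshape(-1, 1)
        preds = np.stack([net(cand) for net in nets])
        mean, std = preds.mean(axis=0), preds.std(axis=0)
        # Merit function (assumed form): favor low predicted values, but
        # also points where the ensemble disagrees, so that regions with
        # predicted maxima are still explored if they are poorly modeled.
        merit = mean - kappa * std
        x_next = cand[np.argmin(merit)]
        # Evaluate the expensive function at the most promising point and
        # add the result to the training set for the next iteration.
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    best = int(np.argmin(y))
    return X[best], y[best]

x_best, y_best = surrogate_optimize(expensive_objective, bounds=(-2.0, 2.0))
print(x_best, y_best)
```

The ensemble spread plays the role of a model-uncertainty estimate here: subtracting `kappa * std` from the predicted mean is one common way to trade off exploitation of predicted minima against exploration of regions the surrogate models poorly.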