Abstract
Noisy optimization is the optimization of objective functions corrupted by noise. A portfolio of algorithms is a set of algorithms equipped with an algorithm selection tool for distributing the computational power among them. We study portfolios of noisy optimization solvers, show that different settings lead to different performances, and obtain a mathematically proven performance guarantee (in the sense that the portfolio performs nearly as well as the best of its algorithms) via an ad hoc selection algorithm dedicated to noisy optimization. A somewhat surprising result is that it is better to compare solvers with some lag; i.e., to recommend the current recommendation of the best solver, where the best solver is selected by comparing recommendations made earlier in the run.
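For intuition, here is a minimal sketch of the lag-based selection idea described above; it is not the authors' implementation, and the names `RandomSearchSolver`, `lag`, and `resamplings` are illustrative assumptions. The portfolio advances all solvers in parallel, periodically re-evaluates each solver's recommendation from `lag` iterations ago (with resampling to average out noise), and finally returns the current recommendation of the solver whose lagged recommendation scored best.

```python
import random


class RandomSearchSolver:
    """Toy solver: random search around its current best point (illustrative only)."""

    def __init__(self, dim, sigma):
        self.x = [0.0] * dim
        self.best_val = float("inf")
        self.sigma = sigma

    def step(self, noisy_f):
        cand = [xi + random.gauss(0, self.sigma) for xi in self.x]
        val = noisy_f(cand)
        if val < self.best_val:
            self.x, self.best_val = cand, val

    def recommendation(self):
        return self.x


def portfolio(solvers, noisy_f, budget, lag=50, resamplings=30):
    """Lag-based selection: rank solvers by re-evaluating the recommendations
    they made `lag` iterations ago, then return the *current* recommendation
    of the winner."""
    history = [[] for _ in solvers]  # past recommendations, one list per solver
    best = 0
    for t in range(budget):
        for i, s in enumerate(solvers):  # uniform split of the iteration budget
            s.step(noisy_f)
            history[i].append(s.recommendation())
        if t >= lag:
            # Older recommendations are more settled, so comparing them
            # (averaged over several noisy evaluations) is more reliable
            # than comparing the solvers' freshest points.
            scores = [
                sum(noisy_f(h[t - lag]) for _ in range(resamplings)) / resamplings
                for h in history
            ]
            best = min(range(len(solvers)), key=scores.__getitem__)
    return solvers[best].recommendation()


if __name__ == "__main__":
    # Noisy sphere: f(x) = ||x - 1||^2 plus Gaussian noise.
    def noisy_f(x):
        return sum((xi - 1.0) ** 2 for xi in x) + random.gauss(0, 0.1)

    solvers = [RandomSearchSolver(dim=3, sigma=s) for s in (0.01, 0.1, 1.0)]
    print(portfolio(solvers, noisy_f, budget=500))
```

In this sketch the comparison evaluations are not charged against the solvers' budget; a faithful accounting would count them, but the point of the example is only the lag: selection is based on old, stable recommendations, while the output is the winner's newest one.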
Cite this paper
Cauwet, M.-L., Liu, J., Teytaud, O.: Algorithm portfolios for noisy optimization: compare solvers early. In: Pardalos, P., Resende, M., Vogiatzis, C., Walteros, J. (eds.) Learning and Intelligent Optimization (LION 2014). LNCS, vol. 8426. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-09584-4_1