
Dynamic Control of Explore/Exploit Trade-Off in Bayesian Optimization

  • Dipti Jasrasaria
  • Edward O. Pyzer-Knapp
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 858)

Abstract

Bayesian optimization offers the possibility of optimizing black-box functions that are not accessible through traditional techniques. The success of Bayesian optimization methods, such as Expected Improvement (EI), is significantly affected by the degree of trade-off between exploration and exploitation. Too much exploration leads to inefficient optimization protocols, whilst too much exploitation leaves the protocol vulnerable to strong initial biases and a high chance of becoming stuck in a local minimum. Typically, a constant margin is used to control this trade-off, which introduces yet another hyper-parameter to be optimized. We propose contextual improvement as a simple yet effective heuristic to counter this, achieving a one-shot optimization strategy. Our proposed heuristic can be calculated swiftly and improves both the speed and robustness of discovery of optimal solutions. We demonstrate its effectiveness on both synthetic and real-world problems, and explore the uncertainty that goes unaccounted for when the search hyperparameters controlling the explore-exploit trade-off are pre-determined.
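
To make the margin mechanism concrete, here is a minimal Python sketch of EI for minimization with an explicit constant margin xi, followed by one hypothetical way to replace that constant with a value computed from the surrogate's current posterior. The function names, the default xi, and the contextual formula (mean predictive variance normalized by the magnitude of the incumbent best) are illustrative assumptions; the paper's precise definition of contextual improvement is given in the main text.

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, y_best, xi=0.01):
        # Standard EI for minimization: a constant margin xi > 0 pushes the
        # search toward exploration; xi = 0 recovers greedy exploitation.
        sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
        z = (y_best - mu - xi) / sigma
        return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

    def contextual_margin(sigma, y_best):
        # Hypothetical data-dependent margin: exploration shrinks as the
        # surrogate's mean predictive variance falls relative to the size of
        # the incumbent best. Illustrative only, not the paper's exact formula.
        return float(np.mean(sigma ** 2)) / max(abs(y_best), 1e-12)

    # Per-iteration use: recompute the margin from the GP posterior rather
    # than fixing xi in advance.
    # xi_t = contextual_margin(sigma, y_best)
    # acq = expected_improvement(mu, sigma, y_best, xi=xi_t)

Because the margin is recomputed from the Gaussian process posterior at every iteration, no explore-exploit hyperparameter is left to be pre-tuned, which is the one-shot property referred to above.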

Keywords

Bayesian optimization · Artificial intelligence · Hyperparameter tuning

Acknowledgements

The authors thank Dr Kirk Jordan for helpful discussions.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. IBM Research, Hartree Centre, Sci-Tech Daresbury, Warrington, UK
