Algorithms for Adversarial Bandit Problems with Multiple Plays

  • Taishi Uchiya
  • Atsuyoshi Nakamura
  • Mineichi Kudo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6331)

Abstract

Adversarial bandit problems studied by Auer et al. [4] are multi-armed bandit problems in which no stochastic assumption is made on the process generating the rewards for actions. In this paper, we extend their theory to the case where k (≥ 1) distinct actions are selected at each time step. As algorithms for our problem, we analyze an extension of Exp3 [4] and an application of a bandit online linear optimization algorithm [1], in addition to two existing algorithms (Exp3 and ComBand [5]), in terms of time and space efficiency and the regret with respect to the best fixed action set. The extension of Exp3, called Exp3.M, performs best on all the measures: it runs in O(K(log k + 1)) time and O(K) space, and suffers at most \(O(\sqrt{kTK\log(K/k)})\) regret, where K is the number of possible actions and T is the number of iterations. The regret upper bound we prove for Exp3.M extends the bound proved by Auer et al. for Exp3.
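The abstract only states Exp3.M's guarantees, so the following is a rough illustrative Python sketch (not the authors' pseudocode) of the scheme it describes: exponential weights with a cap so that no marginal selection probability exceeds 1, and dependent rounding [6] to draw k distinct arms with the prescribed marginals. The function names (`exp3m`, `depround`), the reward-matrix interface, and the tolerance constants are assumptions made for this sketch, and it presumes gamma is not chosen too close to 1.

```python
import math
import random

def depround(p, rng):
    """Dependent rounding (in the spirit of Gandhi et al. [6]): given
    marginals p with an integer sum k, draw a subset S of size k with
    P(i in S) = p[i]. Each step makes at least one coordinate integral."""
    p = list(p)
    while True:
        frac = [i for i, v in enumerate(p) if 1e-9 < v < 1 - 1e-9]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        a, b = min(1 - p[i], p[j]), min(p[i], 1 - p[j])
        if rng.random() < b / (a + b):
            p[i], p[j] = p[i] + a, p[j] - a
        else:
            p[i], p[j] = p[i] - b, p[j] + b
    return [i for i, v in enumerate(p) if v > 0.5]

def exp3m(x, k, gamma, seed=0):
    """Run an Exp3.M-style learner on a T x K reward matrix x with entries
    in [0, 1]; only rewards of the k chosen arms are read (bandit feedback).
    Returns the list of chosen arm sets and the total reward collected."""
    rng = random.Random(seed)
    T, K = len(x), len(x[0])
    w = [1.0] * K
    cap = (1.0 / k - gamma / K) / (1.0 - gamma)  # probability-1 threshold
    chosen, total = [], 0.0
    for t in range(T):
        # If some weight is too large, temporarily cap the top weights at
        # alpha so no marginal probability exceeds 1.
        if max(w) >= cap * sum(w):
            ws = sorted(w, reverse=True)
            tail, alpha = sum(ws), None
            for m in range(1, K + 1):
                tail -= ws[m - 1]          # total weight outside the top m
                if cap * m >= 1.0:
                    break
                a = cap * tail / (1.0 - cap * m)
                if (ws[m] if m < K else 0.0) <= a <= ws[m - 1]:
                    alpha = a
                    break
            wp = [min(wi, alpha) for wi in w]
        else:
            wp = list(w)
        Wp = sum(wp)
        # Marginal probabilities; these sum to k by construction.
        p = [k * ((1 - gamma) * wi / Wp + gamma / K) for wi in wp]
        S = depround(p, rng)
        chosen.append(S)
        for i in S:
            total += x[t][i]
            if wp[i] == w[i]:  # capped arms (p[i] = 1) are left unchanged
                w[i] *= math.exp(k * gamma * (x[t][i] / p[i]) / K)
    return chosen, total
```

The per-round work is dominated by the weight sort and the rounding loop; the paper's O(K(log k + 1)) time bound relies on a more careful implementation than this sketch.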

Keywords

Multi-armed bandit problem, adversarial bandit problem, online learning


References

  1. Abernethy, J., Hazan, E., Rakhlin, A.: Competing in the dark: An efficient algorithm for bandit linear optimization. In: Proceedings of the 21st Annual Conference on Learning Theory, COLT 2008 (2008)
  2. Agrawal, R., Hegde, M.V., Teneketzis, D.: Multi-armed bandits with multiple plays and switching cost. Stochastics and Stochastics Reports 29, 437–459 (1990)
  3. Anantharam, V., Varaiya, P., Walrand, J.: Asymptotically efficient allocation rules for the multiarmed bandit problem with multiple plays – Part I: I.I.D. rewards. IEEE Transactions on Automatic Control 32, 968–976 (1986)
  4. Auer, P., Cesa-Bianchi, N., Freund, Y., Schapire, R.E.: The nonstochastic multiarmed bandit problem. SIAM Journal on Computing 32, 48–77 (2002)
  5. Cesa-Bianchi, N., Lugosi, G.: Combinatorial bandits. In: Proceedings of the 22nd Annual Conference on Learning Theory, COLT 2009 (2009)
  6. Gandhi, R., Khuller, S., Parthasarathy, S., Srinivasan, A.: Dependent rounding and its applications to approximation algorithms. Journal of the ACM 53(3), 320–360 (2006)
  7. György, A., Linder, T., Lugosi, G., Ottucsák, G.: The on-line shortest path problem under partial monitoring. Journal of Machine Learning Research 8, 2369–2403 (2007)
  8. Kleinberg, R.: Notes from week 8: Multi-armed bandit problems. CS 683 – Learning, Games, and Electronic Markets (2007), http://www.cs.cornell.edu/courses/cs683/2007sp/lecnotes/week8.pdf
  9. Krein, M., Milman, D.: On extreme points of regular convex sets. Studia Mathematica, 133–138 (1940)
  10. Mahajan, A., Teneketzis, D.: Multi-armed bandit problems. In: Foundations and Applications of Sensor Management, pp. 121–151. Springer, Heidelberg (2007)
  11. Nakamura, A., Abe, N.: Improvements to the linear programming based scheduling of web advertisements. Electronic Commerce Research 5, 75–98 (2005)
  12. Niculescu-Mizil, A.: Multi-armed bandits with betting. In: COLT 2009 Workshop, pp. 133–138 (2009)
  13. Pandelis, D.G., Teneketzis, D.: On the optimality of the Gittins index rule in multi-armed bandits with multiple plays. Mathematical Methods of Operations Research 50, 449–461 (1999)
  14. Song, N.O., Teneketzis, D.: Discrete search with multiple sensors. Mathematical Methods of Operations Research 60, 1–14 (2004)
  15. Uchiya, T., Nakamura, A., Kudo, M.: Adversarial bandit problems with multiple plays. In: IEICE Technical Report, COMP2009-27 (2009)
  16. Warmuth, M.K., Takimoto, E.: Path kernels and multiplicative updates. Journal of Machine Learning Research, 773–818 (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Taishi Uchiya¹
  • Atsuyoshi Nakamura¹
  • Mineichi Kudo¹

  1. Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan