LEARNING TO PLAY K-ARMED BANDIT PROBLEMS

Francis Maes, Louis Wehenkel, Damien Ernst

Abstract

We propose a learning approach that pre-computes K-armed bandit playing policies by exploiting prior information describing the class of problems targeted by the player. Our algorithm first samples a set of K-armed bandit problems from the given prior and then selects, from a space of candidate policies, the one that yields the best average performance over these problems. The candidate policies use an index for ranking the arms and pick at each play the arm with the highest index; the index of each arm is computed as a linear combination of features describing the history of plays (e.g., number of draws, average reward, variance of rewards and higher-order moments), and an estimation of distribution algorithm is used to determine the optimal parameters in the form of feature weights. We carry out simulations in the case where the prior assumes a fixed number of Bernoulli arms, a fixed horizon, and uniformly distributed parameters of the Bernoulli arms. These simulations show that learned strategies perform very well with respect to several other strategies previously proposed in the literature (UCB1, UCB2, UCB-V, KL-UCB and εn-GREEDY); they also highlight the robustness of these strategies with respect to erroneous prior information.
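The overall shape of the approach can be sketched in code. The feature set (empirical mean plus a logarithmic exploration bonus) and the cross-entropy-style update below are illustrative assumptions, and the function names (`play_policy`, `average_regret`, `learn_weights`) are ours, not the paper's: the sketch only shows an index policy whose index is a weighted sum of history features, evaluated on Bernoulli problems sampled from a uniform prior, with the weights tuned by a simple estimation of distribution loop.

```python
import math
import random

def play_policy(weights, means, horizon, rng):
    """Play one Bernoulli bandit problem with a linear index policy
    and return the regret w.r.t. always playing the best arm."""
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    total_reward = 0.0
    for t in range(horizon):
        if t < k:
            arm = t  # play each arm once to initialise its statistics
        else:
            def index(a):
                mean = sums[a] / counts[a]
                bonus = math.sqrt(math.log(t) / counts[a])
                return weights[0] * mean + weights[1] * bonus
            arm = max(range(k), key=index)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return horizon * max(means) - total_reward

def average_regret(weights, n_problems=200, k=2, horizon=100, seed=0):
    """Sample problems from the prior (uniform Bernoulli parameters)
    and return the mean regret of the policy defined by `weights`."""
    rng = random.Random(seed)
    regrets = []
    for _ in range(n_problems):
        means = [rng.random() for _ in range(k)]
        regrets.append(play_policy(weights, means, horizon, rng))
    return sum(regrets) / n_problems

def learn_weights(n_iter=5, pop=20, elite=5, seed=1):
    """Minimal cross-entropy-style EDA over the two feature weights:
    sample candidates from a Gaussian, keep the elites, refit."""
    rng = random.Random(seed)
    mu, sigma = [1.0, 1.0], [1.0, 1.0]
    for _ in range(n_iter):
        cands = [[rng.gauss(m, s) for m, s in zip(mu, sigma)]
                 for _ in range(pop)]
        cands.sort(key=lambda w: average_regret(w, n_problems=50))
        elites = cands[:elite]
        mu = [sum(w[d] for w in elites) / elite for d in range(2)]
        sigma = [max(0.1, (sum((w[d] - mu[d]) ** 2
                              for w in elites) / elite) ** 0.5)
                 for d in range(2)]
    return mu
```

With `weights = [1.0, 1.0]` the index reduces to a UCB1-like rule; the learning loop searches the weight space for the combination with the lowest average regret over problems sampled from the prior, which is the essence of the method.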

References

  1. Agrawal, R. (1995). Sample mean based index policies with O(log n) regret for the multi-armed bandit problem. Advances in Applied Probability, 27:1054-1078.
  2. Audibert, J., Munos, R., and Szepesvári, C. (2007). Tuning bandit algorithms in stochastic environments. Algorithmic Learning Theory (ALT), pages 150-165.
  3. Audibert, J., Munos, R., and Szepesvári, C. (2008). Exploration-exploitation trade-off using variance estimates in multi-armed bandits. Theoretical Computer Science.
  4. Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002). Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235-256.
  5. Garivier, A. and Cappé, O. (2011). The KL-UCB algorithm for bounded stochastic bandits and beyond. CoRR, abs/1102.2490.
  6. Gonzalez, C., Lozano, J., and Larrañaga, P. (2002). Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, pages 99-124. Kluwer Academic Publishers.
  7. Ishii, S., Yoshida, W., and Yoshimoto, J. (2002). Control of exploitation-exploration meta-parameter in reinforcement learning. Neural Networks, 15:665-687.
  8. Lai, T. and Robbins, H. (1985). Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4-22.
  9. Mersereau, A., Rusmevichientong, P., and Tsitsiklis, J. (2009). A structured multiarmed bandit problem and the greedy policy. IEEE Trans. Automatic Control, 54:2787-2802.
  10. Pelikan, M. and Mühlenbein, H. (1998). Marginal distributions in evolutionary algorithms. In Proceedings of the International Conference on Genetic Algorithms Mendel '98, pages 90-95, Brno, Czech Republic.
  11. Robbins, H. (1952). Some aspects of the sequential design of experiments. Bulletin of The American Mathematical Society, 58:527-536.
  12. Rubinstein, R. and Kroese, D. (2004). The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning. Springer, New York.
  13. Sutton, R. and Barto, A. (1998). Reinforcement Learning: An Introduction. MIT Press.


Paper Citation


in Harvard Style

Maes F., Wehenkel L. and Ernst D. (2012). LEARNING TO PLAY K-ARMED BANDIT PROBLEMS. In Proceedings of the 4th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART, ISBN 978-989-8425-95-9, pages 74-81. DOI: 10.5220/0003733500740081


in Bibtex Style

@conference{icaart12,
author={Francis Maes and Louis Wehenkel and Damien Ernst},
title={LEARNING TO PLAY K-ARMED BANDIT PROBLEMS},
booktitle={Proceedings of the 4th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2012},
pages={74-81},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003733500740081},
isbn={978-989-8425-95-9},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 4th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - LEARNING TO PLAY K-ARMED BANDIT PROBLEMS
SN - 978-989-8425-95-9
AU - Maes F.
AU - Wehenkel L.
AU - Ernst D.
PY - 2012
SP - 74
EP - 81
DO - 10.5220/0003733500740081