Building Poker Agent Using Reinforcement Learning with Neural Networks

Annija Rupeneite


Poker is a game of incomplete and imperfect information. The ability to estimate an opponent and interpret its actions is what distinguishes a world-class player. Finding an optimal game strategy is not enough to win at poker: in live play as in online play, most of the game consists of opponent analysis. This paper describes the development of a poker agent using reinforcement learning with neural networks.
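As a rough sketch of the technique the abstract names, temporal-difference learning with a neural-network value function, the following minimal example updates a tiny value network toward a bootstrapped target. The network shape, feature vector, and hyperparameters here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward value network: poker-state features -> estimated value.
# Layer sizes, features, and hyperparameters are assumptions for illustration.
N_FEATURES, N_HIDDEN = 10, 16
W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_FEATURES))
W2 = rng.normal(0.0, 0.1, (1, N_HIDDEN))

def value(x):
    """Forward pass: return V(s) and the hidden activations."""
    h = np.tanh(W1 @ x)
    return float(W2 @ h), h

def td_update(x, reward, x_next=None, alpha=0.01, gamma=0.95):
    """TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')."""
    global W1, W2
    v, h = value(x)
    v_next = 0.0 if x_next is None else value(x_next)[0]  # terminal state -> 0
    delta = reward + gamma * v_next - v                   # TD error
    grad_h = W2.flatten() * (1.0 - h ** 2)                # backprop through tanh
    W2 = W2 + alpha * delta * h[None, :]
    W1 = W1 + alpha * delta * np.outer(grad_h, x)
    return delta

# One terminal hand: features of the final state, reward of +1 (e.g. pot won).
state = rng.normal(size=N_FEATURES)
td_update(state, reward=1.0)
```

Because the value function is a network rather than a table, the update generalizes across similar game states, which is what makes the approach workable for a state space as large as poker's.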


  1. Coulom R., 2002. Reinforcement Learning Using Neural Networks, with Applications to Motor Control. PhD thesis, Institut National Polytechnique de Grenoble.
  2. Davidson A., 1999. Using Artificial Neural Networks to Model Opponents in Texas Hold'em. University of Alberta.
  3. Davidson A., Billings D., Schaeffer J., Szafron D., 2002. Improved Opponent Modeling in Poker. Artificial Intelligence - Chips Challenging Champions: Games, Computers and Artificial Intelligence, Volume 134, Issue 1-2.
  4. Félix D., Reis L.P., 2008. An Experimental Approach to Online Opponent Modeling in Texas Hold'em Poker. Advances in Artificial Intelligence - SBIA 2008.
  5. Félix D. and Reis L.P., 2008. Opponent Modelling in Texas Hold'em Poker as the Key for Success. ECAI 2008.
  6. Haykin S.S., 1999. Neural Networks: A Comprehensive Foundation. Upper Saddle River, NJ: Prentice Hall.
  7. Hilger M., 2003. Internet Texas Hold'em: Winning Strategies from an Internet Pro. Dimat Enterprises, Inc.
  8. Johanson M., 2007. Robust strategies and counterstrategies: Building a champion level computer poker player. In Masters Abstracts International, volume 46.
  9. Li A., 2013. Enhancing Poker Agents with Hand History Statistics. Bachelor thesis, Technische Universität Darmstadt.
  10. Murphy K.P., 1998. A brief introduction to reinforcement learning. University of British Columbia.
  11. Patel J.R. and Barve S.S., 2014. Reinforcement Learning: Features and Its Applications. International Journal of Computer Technology and Applications, Volume 5, Issue 3.
  12. Poole D. and Mackworth A., 2010. Artificial Intelligence: Foundations of Computational Agents. Cambridge University Press.
  13. Sandberg I.W., Lo J.T., Fancourt C.L., Principe J.C., Katagiri S., Haykin S., 2001. Nonlinear Dynamical Systems: Feedforward Neural Network Perspectives. John Wiley & Sons.
  14. Sklansky D., Malmuth M., 1999. Hold'em Poker for Advanced Players. Two Plus Two Publishing.
  15. Sklansky D., 2004. The Theory of Poker. Two Plus Two Publishing.
  16. Stergiou C. and Siganos D., 1995. Neural Networks, Surprise 96 Volume 4 (Final Reports).
  17. Sutton R.S. and Barto A.G., 1998. Reinforcement Learning: An Introduction. The MIT Press.
  18. Szepesvari C., 2010. Algorithms for Reinforcement Learning. Morgan and Claypool Publishers.
  19. Sweeney N., Sinclair D., 2012. Applying Reinforcement Learning to Poker. Computer Poker Symposium.
  20. Teófilo L.F., Reis L.P., Cardoso H.L., Félix D., Sêca R., Ferreira J., Mendes P., Cruz N., Pereira V., Passos N., 2012. Computer Poker Research at LIACC. 2012 Computer Poker Symposium at AAAI.
  21. Tesauro G., 1995. Temporal Difference Learning and TD-Gammon. Communications of the ACM, Vol. 38, No. 3, March 1995.

Paper Citation

in Harvard Style

Rupeneite A. (2014). Building Poker Agent Using Reinforcement Learning with Neural Networks. In Doctoral Consortium - DCINCO, (ICINCO 2014), pages 22-29

in Bibtex Style

@conference{rupeneite2014,
author={Annija Rupeneite},
title={Building Poker Agent Using Reinforcement Learning with Neural Networks},
booktitle={Doctoral Consortium - DCINCO, (ICINCO 2014)},
year={2014},
pages={22-29},
}

in EndNote Style

JO - Doctoral Consortium - DCINCO, (ICINCO 2014)
TI - Building Poker Agent Using Reinforcement Learning with Neural Networks
SN -
AU - Rupeneite A.
PY - 2014
SP - 22
EP - 29
DO -