ALGORITHMS FOR EVOLVING NO-LIMIT TEXAS HOLD'EM POKER PLAYING AGENTS

Garrett Nicolai, Robert Hilderman


Computers have difficulty learning to play Texas Hold'em Poker. The game exhibits a high degree of stochasticity, hidden information, and opponents who deliberately misrepresent their current state. Poker also has a much larger game space than classic parlour games such as Chess and Backgammon. Evolutionary methods have been shown to find relatively good solutions in large state spaces, and neural networks have been shown to find solutions to non-linear search problems. In this paper, we present several algorithms for teaching agents to play No-Limit Texas Hold'em Poker using a hybrid method known as evolving neural networks. Furthermore, we adapt heuristics such as halls of fame and co-evolution to handle populations of Poker agents, which can contain several hundred opponents, rather than a single opponent. Our agents were evaluated against several benchmark agents. Experimental results show that the overall best performance was obtained by an agent evolved from a single population (i.e., with no co-evolution) using a large hall of fame. These results demonstrate the effectiveness of our algorithms in creating competitive No-Limit Texas Hold'em Poker agents.
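The hall-of-fame heuristic mentioned above can be sketched in miniature: evolve a population of simple agents, and at each generation archive the current champion so that later generations must also score well against past champions. The sketch below is a hypothetical illustration only, not the authors' implementation; the agent encoding (a one-layer linear policy over hand strength), the simplified betting model in `play_hand`, and all parameter values are assumptions made for brevity.

```python
import random

random.seed(0)

class Agent:
    """A tiny linear 'network' mapping private hand strength to fold/call/raise."""
    def __init__(self, weights=None):
        # Three actions (0=fold, 1=call, 2=raise), each scored as w*strength + b.
        self.weights = weights or [random.uniform(-1, 1) for _ in range(6)]

    def act(self, strength):
        scores = [self.weights[2 * i] * strength + self.weights[2 * i + 1]
                  for i in range(3)]
        return max(range(3), key=lambda i: scores[i])

    def mutated(self, sigma=0.2):
        # Gaussian weight perturbation stands in for neuroevolution's mutation step.
        return Agent([w + random.gauss(0, sigma) for w in self.weights])

def play_hand(a, b):
    """Grossly simplified heads-up hand; returns a's payoff."""
    sa, sb = random.random(), random.random()   # private hand strengths
    act_a, act_b = a.act(sa), b.act(sb)
    if act_a == 0:
        return -1                               # a folds, forfeits the ante
    if act_b == 0:
        return 1                                # b folds
    pot = 1 + act_a + act_b                     # raises grow the pot
    return pot if sa > sb else -pot             # showdown

def fitness(agent, opponents, hands=50):
    return sum(play_hand(agent, o) for o in opponents for _ in range(hands))

def evolve(pop_size=20, generations=30, hof_size=5):
    population = [Agent() for _ in range(pop_size)]
    hall_of_fame = []
    for _ in range(generations):
        # Fitness is measured against the live population plus archived champions.
        opponents = population + hall_of_fame
        scored = sorted(population, key=lambda a: fitness(a, opponents),
                        reverse=True)
        hall_of_fame = (scored[:1] + hall_of_fame)[:hof_size]
        elite = scored[:pop_size // 2]
        population = elite + [random.choice(elite).mutated()
                              for _ in range(pop_size - len(elite))]
    return population, hall_of_fame
```

The archive keeps selection pressure from cycling: a strategy that merely exploits the current generation still has to hold up against every stored champion.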



Paper Citation

in Harvard Style

Nicolai G. and Hilderman R. (2010). ALGORITHMS FOR EVOLVING NO-LIMIT TEXAS HOLD'EM POKER PLAYING AGENTS. In Proceedings of the International Conference on Evolutionary Computation - Volume 1: ICEC, (IJCCI 2010) ISBN 978-989-8425-31-7, pages 20-32. DOI: 10.5220/0003063000200032

in Bibtex Style

@conference{icec10,
author={Garrett Nicolai and Robert Hilderman},
title={ALGORITHMS FOR EVOLVING NO-LIMIT TEXAS HOLD'EM POKER PLAYING AGENTS},
booktitle={Proceedings of the International Conference on Evolutionary Computation - Volume 1: ICEC, (IJCCI 2010)},
year={2010},
pages={20-32},
doi={10.5220/0003063000200032},
isbn={978-989-8425-31-7},
}

in EndNote Style

TY - CONF
TI - ALGORITHMS FOR EVOLVING NO-LIMIT TEXAS HOLD'EM POKER PLAYING AGENTS
JO - Proceedings of the International Conference on Evolutionary Computation - Volume 1: ICEC, (IJCCI 2010)
SN - 978-989-8425-31-7
AU - Nicolai G.
AU - Hilderman R.
PY - 2010
SP - 20
EP - 32
DO - 10.5220/0003063000200032