6 CONCLUSIONS
We introduced PLICAS, a poker bot for heads-up Texas Hold'em Fixed Limit, built on the computer poker competition framework. PLICAS integrates approaches such as case-based reasoning, simulation-based bluffing, dynamic range control, and automated aggression adaptation. Our research focuses on the typical advantages of dynamic exploitative approaches aided by information gathering. Participation in the 2010 AAAI Computer Poker Competition (ACPC) showed that the overall performance of PLICAS still leaves considerable room for improvement. However, a differentiated analysis of the 2010 ACPC results shows that poker bots operating with an ε-equilibrium strategy mostly outperform poker bots that use opponent-modeling strategies. From this point of view, PLICAS is the second-best performing participant in the group of opponent-modeling poker bots. One way to improve PLICAS' performance is to evaluate and optimize the direct impact of the functional components (bluff unit, preflop range control, etc.) on the bot's overall playing strength by switching them on and off individually, while avoiding functional interference between the modules; a sketch of such an evaluation follows below. With further training and improvement of these components, PLICAS should be a truly competitive poker bot in the 2011 ACPC.
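As a minimal sketch of this on/off evaluation, the following Python fragment illustrates how the component toggles could be organized. The names ComponentConfig, play_match, and the listed component fields are hypothetical placeholders introduced only for illustration; they are not part of the actual PLICAS implementation.

import dataclasses

@dataclasses.dataclass(frozen=True)
class ComponentConfig:
    # Hypothetical on/off switches for PLICAS' functional components.
    bluff_unit: bool = True
    preflop_range_control: bool = True
    aggression_adaptation: bool = True

def play_match(config, hands=10000):
    """Placeholder: play `hands` hands against a fixed benchmark opponent
    with the given component configuration and return the average
    winnings in small bets per hand."""
    raise NotImplementedError("attach the actual poker engine here")

def ablation_configurations():
    """Baseline with all components enabled, plus one configuration per
    component with exactly that component switched off."""
    baseline = ComponentConfig()
    configs = {"all_components": baseline}
    for field in dataclasses.fields(ComponentConfig):
        configs["without_" + field.name] = dataclasses.replace(
            baseline, **{field.name: False})
    return configs

if __name__ == "__main__":
    # Each configuration would be evaluated with play_match(cfg);
    # comparing its win rate against the full configuration isolates
    # a component's direct impact on playing strength.
    for name, cfg in ablation_configurations().items():
        print(name, cfg)

Comparing the win rate of each reduced configuration against the full configuration, while keeping the opponent and the number of hands fixed, would estimate the direct contribution of a single component without module interference.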