the winning rate of the proposed method. This figure demonstrates that the proposed method is as strong as or stronger than almost all other agents, regardless of which deck is used. In particular, the proposed method achieves an average win rate of 53% against AlvaroAgent, which uses the same state evaluation function, suggesting that the proposed method is at least as good as AlvaroAgent at searching for actions. On the other hand, when using AggroPirateWarrior, the win rate against MCGS was 40%. This is most likely because MCGS prunes options in an ingenious way even while conducting playouts; for the relatively simple AggroPirateWarrior deck, whose game states it could evaluate correctly, this pruning proved more powerful than our state evaluation function.
6 CONCLUSIONS AND FUTURE WORK
In this paper, we proposed a method for action decision-making in Hearthstone based on RHEA. To apply RHEA to Hearthstone, we improved the original algorithm by introducing genetic manipulation techniques, utilizing past search information, and filtering action options.
To evaluate the effectiveness of the proposed method, we implemented agents based on both the proposed method and the original RHEA. The results showed that each improvement yielded a higher winning rate than the corresponding agent without it. Furthermore, our agent played against the top-performing agents from past Hearthstone AI Competitions and outperformed most of them.
In future work, we would like to investigate performance improvements through parameter tuning, since the proposed method has various hyperparameters. We are also interested in improving search efficiency. Specifically, we shall attempt to carry over the better individuals found during previous search steps instead of generating the initial population only at random.
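As a rough illustration of this last idea, the sketch below seeds the next initial population by shifting and carrying over the better individuals from the previous search instead of generating every individual at random. The helper callables `random_plan` and `random_action`, the elite fraction, and the shifting scheme are all assumptions for illustration, not a settled design.

```python
def seed_population(prev_population, prev_scores, pop_size,
                    random_plan, random_action, elite_frac=0.5):
    """Build the next initial population by carrying over the better
    individuals from the previous search step (illustrative sketch)."""
    # Rank the previous plans by their final fitness, best first.
    ranked = [plan for _, plan in sorted(zip(prev_scores, prev_population),
                                         key=lambda pair: pair[0],
                                         reverse=True)]
    n_elite = int(pop_size * elite_frac)
    # Shift each carried-over plan: drop the action that was already
    # executed and append a fresh random action at the horizon.
    seeded = [plan[1:] + [random_action()] for plan in ranked[:n_elite]]
    # Fill the remainder of the population at random, as before.
    seeded += [random_plan() for _ in range(pop_size - len(seeded))]
    return seeded
```

Compared with purely random initialization, seeding of this kind is intended to let the search resume from promising regions found on the previous turn rather than restart from scratch.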
REFERENCES
Blizzard Entertainment (n.d.). Hearthstone official website. Retrieved November 24, 2022, from https://playhearthstone.com.
Browne, C. B., Powley, E., Whitehouse, D., Lucas, S. M., Cowling, P. I., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., and Colton, S. (2012). A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43.
Bursztein, E. (2016). I am a legend: Hacking Hearthstone using statistical learning methods. In 2016 IEEE Conference on Computational Intelligence and Games (CIG), pages 1–8. IEEE.
Choe, J. S. B. and Kim, J.-K. (2019). Enhancing Monte Carlo tree search for playing Hearthstone. In 2019 IEEE Conference on Games (CoG), pages 1–7. IEEE.
Dockhorn, A., Frick, M., Akkaya, Ü., and Kruse, R. (2018). Predicting opponent moves for improving Hearthstone AI. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pages 621–632. Springer.
Dockhorn, A., Hurtado-Grueso, J., Jeurissen, D., Xu, L.,
and Perez-Liebana, D. (2021). Portfolio search and
optimization for general strategy game-playing. In
2021 IEEE Congress on Evolutionary Computation
(CEC), pages 2085–2092. IEEE.
Dockhorn, A. and Mostaghim, S. (2019). Introducing the Hearthstone-AI competition. arXiv preprint arXiv:1906.04238.
Gaina, R. D., Lucas, S. M., and Perez-Liebana, D. (2017).
Rolling horizon evolution enhancements in general
video game playing. In 2017 IEEE Conference on
Computational Intelligence and Games (CIG), pages
88–95. IEEE.
Hearthstone Top Decks (n.d.). Retrieved November 24,
2022, from https://www.hearthstonetopdecks.com/.
Justesen, N., Mahlmann, T., and Togelius, J. (2016). Online evolution for multi-action adversarial games. In European Conference on the Applications of Evolutionary Computation, pages 590–603. Springer.
Kocsis, L. and Szepesvári, C. (2006). Bandit based Monte-Carlo planning. In European Conference on Machine Learning, pages 282–293. Springer.
Perez, D., Samothrakis, S., Lucas, S., and Rohlfshagen, P. (2013). Rolling horizon evolution versus tree search for navigation in single-player real-time games. In Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, pages 351–358.
Perez-Liebana, D., Dieskau, J., Hunermund, M., Mostaghim, S., and Lucas, S. (2015). Open loop search for general video game playing. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, pages 337–344.
Świechowski, M., Tajmajer, T., and Janusz, A. (2018). Improving Hearthstone AI by combining MCTS and supervised learning algorithms. In 2018 IEEE Conference on Computational Intelligence and Games (CIG), pages 1–8. IEEE.
Wang, D. and Moh, T.-S. (2019). Hearthstone AI: Oops to well played. In Proceedings of the 2019 ACM Southeast Conference, pages 149–154.
Zhang, S. and Buro, M. (2017). Improving Hearthstone AI by learning high-level rollout policies and bucketing chance node events. In 2017 IEEE Conference on Computational Intelligence and Games (CIG), pages 309–316. IEEE.