the effective playstyles may change dynamically depending on the playstyle adopted by the majority of the players in each generation.
6 CONCLUSIONS AND FUTURE WORK
In this study, we proposed a framework that combines genetic algorithms and clustering to generate multiple playstyles for Geister, a two-player imperfect-information game. Specifically, the framework generates many agents whose genes encode the parameters of a function for guessing the colors of the opponent's pieces. These agents play against each other, and those with high fitness are retained. The genes of the elites across all generations are then clustered, yielding the desired set of multiple playstyles. In our experiment, this procedure produced five playstyles with a circular dominance relationship.
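To make the pipeline concrete, the following Python sketch reproduces its overall shape under toy assumptions: Geister itself is stubbed out, play_match is a hypothetical stand-in for an actual game, and the gene dimension, feature form, and all hyperparameters are illustrative rather than taken from our experiments. Only the structure (self-play fitness, per-generation elites, k-means clustering of elite genes) mirrors the proposed framework.

# Sketch of the pipeline: evolve gene vectors that parameterise a
# colour-guessing function, keep the elites of every generation,
# then cluster their genes to obtain candidate playstyles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
GENE_DIM = 8        # assumed size of the guessing-function parameters
POP_SIZE = 40
GENERATIONS = 30
N_ELITES = 4
N_PLAYSTYLES = 5    # five clusters, matching the reported result

def guess_color(genes, features):
    # Toy guessing function: a linear score over move features.
    # The real feature set and functional form are assumptions.
    return 1.0 / (1.0 + np.exp(-genes @ features))  # P(piece is blue)

def play_match(genes_a, genes_b):
    # Hypothetical stand-in for a Geister game: the agent whose
    # guess is closer to a random hidden colour wins.
    features = rng.normal(size=GENE_DIM)
    hidden = rng.integers(0, 2)
    err_a = abs(guess_color(genes_a, features) - hidden)
    err_b = abs(guess_color(genes_b, features) - hidden)
    return 1 if err_a < err_b else 0  # 1 -> agent A wins

population = rng.normal(size=(POP_SIZE, GENE_DIM))
elites = []
for _ in range(GENERATIONS):
    # Round-robin self-play: fitness = number of wins.
    wins = np.zeros(POP_SIZE)
    for i in range(POP_SIZE):
        for j in range(i + 1, POP_SIZE):
            w = play_match(population[i], population[j])
            wins[i] += w
            wins[j] += 1 - w
    elite_idx = np.argsort(-wins)[:N_ELITES]
    elites.extend(population[elite_idx])
    # Next generation: mutated copies of the current elites.
    parents = population[elite_idx]
    children = parents[rng.integers(0, N_ELITES, POP_SIZE)]
    population = children + rng.normal(scale=0.1, size=children.shape)

# Cluster the genes of all elites into candidate playstyles.
styles = KMeans(n_clusters=N_PLAYSTYLES, n_init=10).fit(np.array(elites))
print(styles.cluster_centers_)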
In this experiment, we considered playstyles characterized only by the manner of guessing piece colors. In future work, we would like to generate more diverse playstyles determined by the board evaluation functions as well. In addition, we plan to investigate an improved framework, based on methods such as reinforcement learning, that allows agents to play more strongly while retaining their characteristic playstyles.