lected agents are classified with clustering, generat-
ing playstyles. We applied our proposed framework
to the roguelike game and demonstrated that multiple
playstyles were generated.
In future studies, we will verify the effectiveness
of the proposed framework by applying it to more
complex games, such as Super Mario Bros., which
involve many more possible actions and states. We are
also interested in incorporating Quality Diversity
algorithms (Pugh et al., 2016), especially MAP-Elites
(Mouret and Clune, 2015), into our framework to make
playstyle generation more effective and expressive.
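The MAP-Elites loop mentioned above is compact: an archive keeps one elite per behavior-descriptor cell, and mutated copies of randomly chosen elites compete only against the current elite of their own cell. The following is a minimal, self-contained sketch; the genome, fitness, and behavior functions are toy placeholders chosen purely to illustrate the loop, not the agent representation used in this paper.

```python
import random

random.seed(0)   # reproducible toy run

GRID = 10        # number of behavior-descriptor cells
GENOME_LEN = 5   # toy genome length
ITERATIONS = 2000

def fitness(genome):
    """Toy fitness: genomes whose genes sum close to zero score higher."""
    return -abs(sum(genome))

def behavior_cell(genome):
    """Toy 1-D behavior descriptor (mean absolute gene) mapped to a cell index."""
    b = sum(abs(g) for g in genome) / GENOME_LEN   # in [0, 1] for genes in [-1, 1]
    return min(int(b * GRID), GRID - 1)

def mutate(genome):
    """Gaussian mutation of one gene, clipped to [-1, 1]."""
    child = list(genome)
    i = random.randrange(GENOME_LEN)
    child[i] = max(-1.0, min(1.0, child[i] + random.gauss(0.0, 0.2)))
    return child

archive = {}  # cell index -> (fitness, genome): one elite per cell

# Seed the archive with random genomes.
for _ in range(50):
    g = [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]
    c, f = behavior_cell(g), fitness(g)
    if c not in archive or f > archive[c][0]:
        archive[c] = (f, g)

# Main loop: pick a random elite, mutate it, and keep the child only if
# it beats the current elite of its behavior cell.
for _ in range(ITERATIONS):
    parent = random.choice(list(archive.values()))[1]
    child = mutate(parent)
    c, f = behavior_cell(child), fitness(child)
    if c not in archive or f > archive[c][0]:
        archive[c] = (f, child)
```

In the proposed framework, the behavior descriptor could correspond to playstyle features and the fitness to agent performance, so the archive would hold one high-performing agent per playstyle niche.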
REFERENCES
Brown, N. and Sandholm, T. (2019). Superhuman AI for
multiplayer poker. Science, 365(6456):885–890.
Drachen, A., Canossa, A., and Yannakakis, G. N. (2009).
Player modeling using self-organization in Tomb
Raider: Underworld. In 2009 IEEE Symposium on
Computational Intelligence and Games, pages 1–8.
IEEE.
Fan, T., Shi, Y., Li, W., and Ikeda, K. (2019). Position
control and production of various strategies for
deep learning Go programs. International Conference
on Technologies and Applications of Artificial Intelli-
gence, pages 1–6.
Holmgård, C., Green, M. C., Liapis, A., and Togelius, J.
(2019). Automated playtesting with procedural per-
sonas through MCTS with evolved heuristics. IEEE
Transactions on Games, 11:352–362.
Holmgård, C., Liapis, A., Togelius, J., and Yannakakis,
G. N. (2014). Evolving personas for player decision
modeling. In Conference on Computational Intelli-
gence and Games, pages 1–8.
Holmgård, C., Liapis, A., Togelius, J., and Yannakakis,
G. N. (2016). Evolving models of player decision
making: Personas versus clones. Entertainment Com-
puting, 16:95–104.
Ikeda, K. and Viennot, S. (2013). Production of various
strategies and position control for Monte-Carlo Go —
entertaining human players. Conference on Computa-
tional Intelligence in Games, pages 1–8.
Ishii, R., Ito, S., Ishihara, M., Harada, T., and Thawonmas,
R. (2018). Monte-Carlo tree search implementation of
fighting game AIs having personas. In Conference on
Computational Intelligence and Games, pages 1–8.
Leyton-Brown, K. and Shoham, Y. (2008). Essentials of
Game Theory: A Concise Multidisciplinary Introduc-
tion. Morgan and Claypool Publishers.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A.,
Antonoglou, I., Wierstra, D., and Riedmiller, M.
(2013). Playing Atari with deep reinforcement learn-
ing. arXiv:1312.5602.
Mouret, J.-B. and Clune, J. (2015). Illuminating search
spaces by mapping elites. arXiv:1504.04909.
Ortega, J., Shaker, N., Togelius, J., and Yannakakis, G. N.
(2013). Imitating human playing styles in Super Mario
Bros. Entertainment Computing, 4:93–104.
Osborne, M. J. and Rubinstein, A. (1994). A Course in
Game Theory. MIT Press.
Pelleg, D. and Moore, A. W. (2000). X-means: Extend-
ing k-means with efficient estimation of the number
of clusters. In Proceedings of the International Con-
ference on Machine Learning, pages 727–734.
Pugh, J. K., Soros, L. B., and Stanley, K. O. (2016). Qual-
ity diversity: A new frontier for evolutionary compu-
tation. Frontiers in Robotics and AI, 3.
Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai,
M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D.,
Graepel, T., et al. (2017). Mastering chess and shogi
by self-play with a general reinforcement learning al-
gorithm. arXiv:1712.01815.
Srinivas, M. and Patnaik, L. M. (1994). Genetic algorithms:
A survey. Computer, 27(6):17–26.
Stanley, K. O. and Miikkulainen, R. (2002). Evolving neu-
ral networks through augmenting topologies. Evolu-
tionary Computation, 10:99–127.
Tampuu, A., Matiisen, T., Kodelja, D., Kuzovkin, I., Korjus,
K., Aru, J., Aru, J., and Vicente, R. (2017). Multiagent
cooperation and competition with deep reinforcement
learning. PLoS ONE, 12:1–15.
Tychsen, A. and Canossa, A. (2008). Defining personas in
games using metrics. In Conference on Future Play,
pages 73–80.
Yannakakis, G. N., Spronck, P., Loiacono, D., and André,
E. (2013). Player modeling. In Artificial and Compu-
tational Intelligence in Games. Dagstuhl Publishing.
ICAART 2022 - 14th International Conference on Agents and Artificial Intelligence