ture importance and thus feature selection (Breiman,
2001), which can be extremely valuable when designing
the state and action vectors. It would also be of interest
to assess the suitability of other algorithms, such as
deep neural networks, in place of extremely randomized
trees.
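As a minimal sketch of the feature-selection idea above, the impurity-based importances exposed by extremely randomized trees in scikit-learn (Pedregosa et al., 2011) can be used to rank candidate state features; the dataset, feature names, and thresholds below are purely illustrative, not taken from the paper's experiments.

```python
# Sketch: ranking hypothetical state features with the impurity-based
# importances of an ensemble of extremely randomized trees.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.RandomState(0)
n_samples = 500
# Four hypothetical state features; only f0 and f1 drive the label.
X = rng.rand(n_samples, 4)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.75).astype(int)

model = ExtraTreesClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Importances sum to 1; near-zero values flag features that could be
# dropped from the state vector.
for name, imp in zip(["f0", "f1", "f2", "f3"], model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

In this sketch the two informative features receive most of the importance mass, while the two noise features score near zero.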
ACKNOWLEDGEMENTS
Raphael Fonteneau is a postdoctoral fellow of the
F.R.S.-FNRS from which he acknowledges financial
support. Antonio Sutera is a PhD fellow of the FRIA
from which he acknowledges financial support.
REFERENCES
Bauckhage, C., Thurau, C., and Sagerer, G. (2003). Learn-
ing human-like opponent behavior for interactive
computer games. In Pattern Recognition, pages 148–
155. Springer.
Breiman, L. (2001). Random forests. Machine learning,
45(1):5–32.
Browne, C. B., Powley, E., Whitehouse, D., Lucas, S. M.,
Cowling, P. I., Rohlfshagen, P., Tavener, S., Perez, D.,
Samothrakis, S., and Colton, S. (2012). A survey of
Monte Carlo tree search methods. IEEE Transactions
on Computational Intelligence and AI in Games,
4(1):1–43.
Bunescu, R., Ge, R., Kate, R. J., Marcotte, E. M., Mooney,
R. J., Ramani, A. K., and Wong, Y. W. (2005). Com-
parative experiments on learning information extrac-
tors for proteins and their interactions. Artificial intel-
ligence in medicine, 33(2):139–155.
Cowling, P. I., Ward, C. D., and Powley, E. J. (2012). En-
semble determinization in Monte Carlo tree search for
the imperfect information card game Magic: The Gath-
ering. IEEE Transactions on Computational Intelligence
and AI in Games, 4(4):241–257.
Bockhorst, J. and Craven, M. (2005). Markov networks for
detecting overlapping elements in sequence data. Ad-
vances in Neural Information Processing Systems, 17:193.
Davis, J. and Goadrich, M. (2006). The relationship be-
tween precision-recall and ROC curves. In Proceed-
ings of the 23rd International Conference on Machine
Learning, pages 233–240. ACM.
Frandsen, F., Hansen, M., Sørensen, H., Sørensen, P.,
Nielsen, J. G., and Knudsen, J. S. (2010). Predict-
ing player strategies in real time strategy games. Mas-
ter's thesis.
Gemine, Q., Safadi, F., Fonteneau, R., and Ernst, D. (2012).
Imitative learning for real-time strategy games. In
Computational Intelligence and Games (CIG), 2012
IEEE Conference on, pages 424–429. IEEE.
Geurts, P., Ernst, D., and Wehenkel, L. (2006). Extremely
randomized trees. Machine learning, 63(1):3–42.
Goadrich, M., Oliphant, L., and Shavlik, J. (2004).
Learning ensembles of first-order clauses for recall-
precision curves: A case study in biomedical infor-
mation extraction. In Inductive logic programming,
pages 98–115. Springer.
Gorman, B. and Humphrys, M. (2007). Imitative learning
of combat behaviours in first-person computer games.
Proceedings of CGAMES.
Hanley, J. A. and McNeil, B. J. (1982). The meaning and
use of the area under a receiver operating characteris-
tic (ROC) curve. Radiology, 143(1):29–36.
Lee, C.-S., Wang, M.-H., Chaslot, G., Hoock, J.-B., Rim-
mel, A., Teytaud, F., Tsai, S.-R., Hsu, S.-C., and
Hong, T.-P. (2009). The computational intelligence of
MoGo revealed in Taiwan's Computer Go tournaments.
IEEE Transactions on Computational Intelligence and
AI in Games, 1(1):73–89.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V.,
Thirion, B., Grisel, O., Blondel, M., Prettenhofer,
P., Weiss, R., Dubourg, V., Vanderplas, J., Passos,
A., Cournapeau, D., Brucher, M., Perrot, M., and
Duchesnay, E. (2011). Scikit-learn: Machine learning
in Python. Journal of Machine Learning Research,
12:2825–2830.
Provost, F. J., Fawcett, T., and Kohavi, R. (1998). The case
against accuracy estimation for comparing induction
algorithms. In ICML, volume 98, pages 445–453.
Rimmel, A., Teytaud, F., Lee, C.-S., Yen, S.-J., Wang,
M.-H., and Tsai, S.-R. (2010). Current frontiers in
Computer Go. IEEE Transactions on Computational
Intelligence and AI in Games, 2(4):229–238.
Safadi, F., Fonteneau, R., and Ernst, D. (2015). Arti-
ficial intelligence in video games: Towards a uni-
fied framework. International Journal of Computer
Games Technology, 2015.
Sailer, F., Buro, M., and Lanctot, M. (2007). Adversarial
planning through strategy simulation. In Computa-
tional Intelligence and Games, 2007. CIG 2007. IEEE
Symposium on, pages 80–87. IEEE.
Soemers, D. (2014). Tactical planning using MCTS in the
game of StarCraft. Master's thesis, Maastricht Uni-
versity.
Sutera, A. (2013). Characterization of variable importance
measures derived from decision trees. Master's thesis,
University of Liège.
van den Herik, H. J. (2010). The Drosophila revisited. ICGA
Journal, 33(2):65–66.
Ward, C. D. and Cowling, P. I. (2009). Monte Carlo search
applied to card selection in Magic: The Gathering. In
Computational Intelligence and Games, 2009. CIG
2009. IEEE Symposium on, pages 9–16. IEEE.
Decision Making from Confidence Measurement on the Reward Growth using Supervised Learning - A Study Intended for Large-scale
Video Games