ACKNOWLEDGEMENTS
We thank our students for their great engagement and valuable feedback, as well as our department authorities for their unconditional support of this didactic experiment and for the motivating decision to make this module a regular part of the computer science curriculum at HSLU. We also greatly appreciate the short-notice provision of prizes for the winning student teams by Bison Schweiz AG. Finally, we are much obliged to Roland Christen for supporting development and deployment.
CSEDU 2019 - 11th International Conference on Computer Supported Education