students who tried it could enter a player into the tournament, i.e., their player could beat the dummy and therefore they passed the project. Moreover, since staying alive during the tournament provides extra points toward the course grade, this activity really helps several students pass the course (about 20% of students pass the whole course thanks to these extra points).
To support these statements, last semester we administered a survey to our students. Figures 2 and 3 show the results for each of the issues on which we consulted students. Answers ranged from 1 (strongly disagree) to 5 (strongly agree). From the surveys, we can infer that almost all students enjoy the game activity, that they do not find it too hard to beat the dummy player, and that they prefer it to more standard assignments. They also like competing against each other. Therefore, we have corroborated that presenting the project as a game has had a great impact on teaching. Moreover, this practice model is very attractive and motivating to our students.
One might also wonder whether there is a correlation between the students' ranking in the tournament and their final grades in the course. Figure 4 shows this information, and Spearman's rank correlation indicates that, although the correlation is weak (rho = 0.38), it is statistically significant (p-value = 2.757e-05).
As mentioned above, during the grand final the programmers of the best players explain their strategies. According to their explanations, in a few cases it is possible to reach the grand final with a very simple player, but in general the winners implement smart and sophisticated strategies built on elaborate algorithms.
8 DISCUSSION
Throughout this work we have described our experience introducing the programming of computer strategies for computer games into typical CS2 courses. We have shown evidence that this kind of programming activity is fun and highly motivating for computer science and mathematics students. This motivation also encourages professors, and thus fosters a pleasant working environment.
As a weak point of such a project, one could argue that students seem to be so hooked that they spend more than the recommended hours programming and improving their players, to the detriment of the hours they should dedicate to other parts of the course and to other courses. However, when surveyed about this issue, they claimed that, although they did dedicate a large amount of time and work, it was mostly free personal time that they would not otherwise have dedicated to studying.
Overall, we consider it such a successful experience that we want to continue improving and spreading it. Among the ideas for taking it further, it is worth mentioning that the games we have created so far are very competitive; in the future, we would therefore also like to develop collaborative games in which players must help each other in some way. We also want to extend our online system to offer these games to the general public. A systematization of the website and of the design process for new games is currently under development.
ACKNOWLEDGEMENTS
We thank Omer Giménez and Mario G. Munzón for all their enthusiasm, ideas and programming time.
CSEDU 2013 - 5th International Conference on Computer Supported Education