On the other hand, if we look at the marks obtained in the theoretical part of the exam, marks are lower overall than in the lab exam, but we can still see a difference of 1.2 points in favour of the participants. This margin is roughly half the one observed in the lab exam, which suggests a small causal effect, since the gap is wider precisely in the area that the contest was intended to strengthen.
We also analysed the marks obtained against the number of test cases solved, but we did not find a statistically significant correlation.
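Such a check can be carried out with a standard Pearson correlation test; the sketch below is purely illustrative, using scipy and made-up marks rather than the study's actual data.

from scipy.stats import pearsonr

# Illustrative (made-up) data: final exam marks and contest test cases solved.
marks = [7.5, 6.0, 8.2, 5.5, 9.0, 6.8]
cases_solved = [12, 9, 15, 7, 14, 10]

r, p_value = pearsonr(marks, cases_solved)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # p >= 0.05: no significant correlation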
Other studies have found a small change in absolute grade and grade distribution, but, most importantly, an improvement in self-reported metrics such as perceived difficulty of the subject, familiarity, proactivity in class and effort dedicated (Bandeira et al., 2019). Furthermore, participation in programming contests is seen as productive and career-building (Raman et al., 2018).
5 RECOMMENDATIONS FOR FUTURE EDITIONS
Preparing and managing a programming contest can
be a daunting task, but it does not have to be.
The first obstacle is designing and selecting the problem set, together with the test cases that determine whether a solution is valid. Problems can be sourced from the subject's own exercise sets, from more complicated versions of those exercises, or from entirely new problems, depending on the desired level of complexity and how much time we want students to dedicate to them.
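When test cases have to be produced for a problem, a simple and reliable approach is to generate random inputs and compute the expected answers with a trusted reference solution. The sketch below assumes a hypothetical problem (summing a list of integers) and a tests/ directory of paired .in/.ans files, purely for illustration.

import random
from pathlib import Path

def reference_solution(numbers: list[int]) -> int:
    """Trusted implementation used to produce the expected answers."""
    return sum(numbers)

def generate_cases(n_cases: int = 10, out_dir: str = "tests") -> None:
    """Write paired input (.in) and expected-answer (.ans) files."""
    Path(out_dir).mkdir(exist_ok=True)
    for i in range(1, n_cases + 1):
        numbers = [random.randint(-1000, 1000) for _ in range(random.randint(1, 100))]
        (Path(out_dir) / f"{i:02d}.in").write_text(
            f"{len(numbers)}\n" + " ".join(map(str, numbers)) + "\n"
        )
        (Path(out_dir) / f"{i:02d}.ans").write_text(f"{reference_solution(numbers)}\n")

if __name__ == "__main__":
    generate_cases()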
Then, the type of contest and scoring system must be established. This includes automating as much of the submission and scoring process as possible. When organizing a binary (pass/fail) contest, software like DOMjudge (https://www.domjudge.org/) can be very useful, as it is freely available and allows the organizers to set up a full competition (Kinkhorst, 2014), including judging submissions automatically as they are received and generating a live ranking. For contests with partial scoring (DOMjudge only gives pass/fail results), Contest Management System (CMS, https://cms-dev.github.io/) is a viable alternative. There are many other judge software suites (Wasik et al., 2018), as any sufficiently large competition adapts or creates a software judge to fit its needs. Even DOMjudge and CMS may not be flexible enough, as they judge by comparing the output produced for a given input and cannot evaluate behaviour such as reading and writing files, so we recommend, if possible, adapting the judge to the contest's pedagogical requirements (Bowring, 2008). Regardless, a fully automated judge is most desirable, because it gives feedback to participants instantly, letting them make submissions whenever it suits them best and allowing them to issue fixes to incorrect submissions.
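As an illustration of this output-comparison model, the following sketch judges a submission against a directory of test cases and accepts each case only when the produced output matches the expected answer after normalising whitespace; the command line, the tests/ layout and the two-second time limit are assumptions for the example, not the behaviour of any particular judge.

import subprocess
from pathlib import Path

def normalise(text: str) -> str:
    """Ignore trailing whitespace and blank-line differences."""
    return "\n".join(line.rstrip() for line in text.strip().splitlines())

def run_submission(cmd: list[str], input_text: str, time_limit: float = 2.0) -> str:
    """Run the contestant's program with the test input on stdin."""
    result = subprocess.run(
        cmd, input=input_text, capture_output=True, text=True, timeout=time_limit
    )
    return result.stdout

def judge(cmd: list[str], test_dir: str = "tests") -> tuple[int, int]:
    """Return (solved, total) over all test cases in test_dir."""
    solved = total = 0
    for case in sorted(Path(test_dir).glob("*.in")):
        expected = case.with_suffix(".ans").read_text()
        total += 1
        try:
            output = run_submission(cmd, case.read_text())
        except subprocess.TimeoutExpired:
            continue  # a time limit violation counts as a failed case
        if normalise(output) == normalise(expected):
            solved += 1
    return solved, total

if __name__ == "__main__":
    solved, total = judge(["python3", "submission.py"])
    print(f"{solved}/{total} test cases passed")

A binary contest would award a problem only when every case passes, while a contest with partial scoring could award points in proportion to solved/total.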
Lastly, there must be some incentive for students to participate, such as counting the contest result towards the subject's grade or giving out prizes for the winner(s).
It is not necessary to generate personalized feedback for each participant, but giving feedback and hints on especially difficult problems can motivate participants who have not yet solved them. In the end, problems that are out of reach for everyone generate a lack of interest in the event.
The contest must also be monitored for its entire duration, as unexpected failures can appear in test cases; when they do, corrected test cases should be promptly loaded into the judge system and participants should be notified.
6 CONCLUSIONS
This work describes the implementation of a programming contest in a second-year subject of the Bachelor's Degree in Informatics Engineering. It details a pilot contest and gives recommendations for organizing future events. The pilot study included 18 participants across two lab groups. The results show that participants achieved substantially better marks in later exams than students who did not take part, which points to a possible use of contests and competitions as reinforcement or additional activities outside the classroom. However, the results are not conclusive enough due to the small sample size. It is our intention to repeat this experiment with larger groups.
REFERENCES
Bandeira, I. N., Machado, T. V., Dullens, V. F., and Canedo, E. D. (2019). Competitive programming: A teaching methodology analysis applied to first-year programming classes. In 2019 IEEE Frontiers in Education Conference (FIE), pages 1–8.
Bowring, J. F. (2008). A new paradigm for programming
competitions. SIGCSE Bull., 40(1):87–91.
Combéfis, S., Beresnevičius, G., and Dagienė, V. (2016). Learning programming through games and contests: Overview, characterisation and discussion. Olympiads in Informatics, 10:39–60.