the goal is then to propose the same exercise to both opponents at the same time and to compare their results, both in mark and in time taken to reach that mark.
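As a rough illustration of this comparison (the function and field names below are ours, not the platform's), one way to decide such a duel is to compare marks first and break ties on the time needed to reach that mark:

    from dataclasses import dataclass

    @dataclass
    class Attempt:
        student: str
        mark: float           # grade obtained on the exercise
        seconds_to_mark: int  # time needed to reach that mark

    def compare_duel(a: Attempt, b: Attempt) -> str:
        """Higher mark wins; on equal marks, the faster student wins."""
        if a.mark != b.mark:
            return a.student if a.mark > b.mark else b.student
        if a.seconds_to_mark != b.seconds_to_mark:
            return a.student if a.seconds_to_mark < b.seconds_to_mark else b.student
        return "draw"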
5.2 Epsilon-better Peeping
The second incentive is to offer students who have obtained a mark m, and who are eager to progress, the chance to peep at two other students' submissions with slightly better marks, i.e., around m + ε. This is what we call "Epsilon-better peeping". It encourages reading others' code, another important skill worth stressing, since beginners often think that they code for computers rather than for humans! The slightly better mark may have been obtained through a better definition of the function or through better tests; both can help students improve their own work. Reading these other submissions carefully and determining why they are better may be eye-opening.
To prevent students from simply copy-pasting better solutions, we will limit the number of times a student may peep at others' submissions.
For students who remain stuck at very low marks, we will probably have to set ε to a larger value. If a large number of students attend this MOOC, we may try various settings for this parameter to help students climb the first step.
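A minimal sketch of this selection step is given below, under assumptions of ours: the quota, the default ε and the data layout are illustrative, not the actual platform's API.

    import random

    PEEP_QUOTA = 5          # assumed maximum number of peeps per exercise
    DEFAULT_EPSILON = 1.0   # may be enlarged for students stuck at low marks

    def pick_epsilon_better(mark, graded_pool, peeps_used,
                            epsilon=DEFAULT_EPSILON, how_many=2):
        """graded_pool: list of (submission_id, mark) pairs from other students.
        Returns up to `how_many` submission ids whose marks lie in
        (mark, mark + epsilon], or an empty list if the peeping quota is spent."""
        if peeps_used >= PEEP_QUOTA:
            return []
        candidates = [sid for sid, m in graded_pool if mark < m <= mark + epsilon]
        random.shuffle(candidates)   # avoid always serving the same submissions
        return candidates[:how_many]

If no candidate falls within ε, the same routine can simply be retried with a larger ε, which is the tuning knob discussed above.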
5.3 Recommendation
After being served others' submissions, students will have to tell whether any of them was useful or not. This is a kind of recommendation system (or crowd ranking) from which the most helpful submissions should emerge. However, unlike usual recommendation systems, where a huge number of people recommend a few items (movies, for instance), here we have a few students producing a huge number of submissions. Selecting the most appropriate submissions is therefore a real challenge.
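The crowd-ranking step could be as simple as the following sketch (identifiers are ours): each peep is followed by a "was this helpful?" report, and submissions are ranked by their smoothed fraction of positive reports so that rarely served submissions are neither over- nor under-rated.

    from collections import defaultdict

    votes = defaultdict(lambda: [0, 0])   # submission_id -> [helpful, not helpful]

    def record_feedback(submission_id, was_helpful):
        votes[submission_id][0 if was_helpful else 1] += 1

    def helpfulness(submission_id, prior=1.0):
        helpful, not_helpful = votes[submission_id]
        return (helpful + prior) / (helpful + not_helpful + 2 * prior)

    def most_helpful(candidate_ids, top=2):
        """Among candidate epsilon-better submissions, serve the ones
        reported as most helpful so far."""
        return sorted(candidate_ids, key=helpfulness, reverse=True)[:top]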
What we envision is to ask the teaching assistants to write a set of programs with increasing marks and, for the first edition of the MOOC, to favour these programs. This also solves the bootstrap problem, since others' submissions must already exist before this incentive can be implemented.
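A sketch of this bootstrap, under assumptions of ours about the data layout and the size of the boost, is the following: the pool is seeded with the assistants' reference programs, which receive a bonus during the first edition until enough student feedback has accumulated.

    TA_BOOST = 0.25   # arbitrary bonus; to be tuned or dropped after bootstrap

    def seeded_score(submission, first_edition=True):
        """submission: dict with at least 'helpfulness' (0..1) and 'author_is_ta'."""
        score = submission["helpfulness"]
        if first_edition and submission["author_is_ta"]:
            score += TA_BOOST
        return score

    def rank_pool(pool, first_edition=True):
        return sorted(pool, key=lambda s: seeded_score(s, first_edition),
                      reverse=True)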
Accumulating students' submissions should allow us to elaborate a taxonomy of programs and errors. This taxonomy will help improve grading reports: reports may include hints triggered by the kind of error recognized. The recommendation system that selects the most helpful submissions may also use that taxonomy. However, this taxonomy will only be taken into account for the next edition of the MOOC.
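To make the idea of taxonomy-triggered hints concrete, here is a hypothetical illustration (the error categories and messages are invented for the example): once a submission's error is classified, the grading report is enriched with a hint matched to that category.

    HINTS = {
        "missing-base-case":    "Your recursion never stops: check the base case.",
        "wrong-recursive-call": "The recursive call does not get closer to the base case.",
        "weak-tests":           "Your tests do not cover boundary values such as 0 or the empty list.",
    }

    def enrich_report(report, error_categories):
        """Append one hint per recognized error category to the grading report."""
        lines = [report]
        for category in error_categories:
            hint = HINTS.get(category)
            if hint:
                lines.append("Hint: " + hint)
        return "\n".join(lines)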
6 RELATED WORK
Mechanical grading has been used for many years in many different contexts and programming languages. However, generic architectures such as (Striewe et al., 2009) that support multiple languages and that are scalable and robust are not so common. Our infrastructure is one of them.
The estimation of students' skill is also a well-studied domain (Heiner et al., 2004). Many works try to characterize the student model, that is, its shape and its parameters (Jonsson et al., 2005) (Cen et al., 2006). They often start from an analysis relating exercises to the primitive skills they involve; then they observe students' progress (by mining the logs) in order to determine the parameters that best fit the model, mainly with the "expectation maximization" technique (Ferguson, 2005).
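For readers unfamiliar with that technique, the toy sketch below shows expectation maximization on a deliberately simplified one-skill guess/slip model (our simplification, not the models of the cited works): each student is latently a "master" or not, and the observed data are binary exercise results.

    def em_fit(outcomes, iters=50, eps=1e-6):
        """outcomes: list of per-student lists of 0/1 exercise results.
        Returns (prior_mastery, slip, guess) estimated by EM."""
        prior, slip, guess = 0.5, 0.1, 0.2        # initial parameter guesses
        for _ in range(iters):
            # E-step: posterior probability that each student is a master
            posteriors = []
            for results in outcomes:
                like_master, like_non = prior, 1.0 - prior
                for r in results:
                    like_master *= (1 - slip) if r else slip
                    like_non *= guess if r else (1 - guess)
                posteriors.append(like_master / (like_master + like_non + eps))
            # M-step: re-estimate parameters from expected counts
            prior = sum(posteriors) / len(posteriors)
            succ_m = fail_m = succ_n = fail_n = 0.0
            for w, results in zip(posteriors, outcomes):
                for r in results:
                    succ_m += w * r
                    fail_m += w * (1 - r)
                    succ_n += (1 - w) * r
                    fail_n += (1 - w) * (1 - r)
            slip = (fail_m + eps) / (succ_m + fail_m + 2 * eps)
            guess = (succ_n + eps) / (succ_n + fail_n + 2 * eps)
        return prior, slip, guess

For instance, em_fit([[1, 1, 1, 0], [0, 0, 1, 0], [1, 1, 1, 1]]) fits the three parameters to three students' results on four exercises.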
These studies use far more information than we do, since they mine the logs of an intelligent tutoring system, which record which exercise is delivered, how long the student reads the stem, what help they request, etc. By contrast, our grading infrastructure only gives us access to marks. Our set of proposed exercises is not (yet) related to the skills it involves, nor is the set of skills clearly stated. Therefore, we are currently more interested in providing incentives to work in pairs with attractive but rigorous feedback.
While recommendation systems are legion, recommending the slightly better programs that helped students progress may be an interesting idea. We will see whether our MOOC lives up to its promises.
7 FINAL REMARKS
In this paper, we present some ideas that are currently under development for a MOOC teaching recursive programming to beginners. This MOOC will start in March 2014, hence results are not yet known.
However, as far as we know, the conjunction of a grading machinery, a skill ranking algorithm and a recommendation system for help seems to be innovative and worth studying.
REFERENCES
Beck, K. (2000). eXtreme Programming. http://en.wikipedia.org/wiki/Extreme_programming.
Beck, K. and Gamma, E. (2012). The JUnit framework, v4.11. http://junit.org/.
Brygoo, A., Durand, T., Manoury, P., Queinnec, C., and Soria, M. (2002). Experiment around a training en-