and marginal gain. These functions provide a way to rank the adopted measures as having a negative or a positive impact. Our findings are valuable for the design of future strategies for this and similar courses.
The analysis tools reported in this study provide a way to assess the effectiveness of new measures. According to its cost-benefit analysis, each measure can be maintained, tuned, or withdrawn, under the general hypothesis that neither the current content of the course nor the proficiency level achieved by students who pass the course should change.
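To make this decision rule concrete, the following sketch (in Python, with entirely hypothetical measure names, costs, gains, and thresholds, none of which are values from our study) ranks measures by marginal gain per hour of faculty workload and classifies each as maintain, tune, or withdraw:

```python
# A minimal sketch (not the paper's actual model) of the cost-benefit
# decision rule described above. All data here are hypothetical.

# Each measure: extra faculty workload (hours/semester) and the
# marginal gain in pass rate (percentage points) attributed to it.
measures = {
    "red-light grading":   {"cost_hours": 120.0, "gain_pp": 0.5},
    "exam re-weighting":   {"cost_hours": 0.0,   "gain_pp": 3.0},
    "weekly judge drills": {"cost_hours": 40.0,  "gain_pp": 2.0},
}

def verdict(cost_hours: float, gain_pp: float,
            min_gain: float = 1.0, max_hours_per_pp: float = 30.0) -> str:
    """Classify a measure: keep it, tune it, or withdraw it."""
    if gain_pp < min_gain:
        return "withdraw"   # negligible impact on the pass rate
    if cost_hours == 0.0 or cost_hours / gain_pp <= max_hours_per_pp:
        return "maintain"   # the gain justifies the cost
    return "tune"           # positive impact, but too expensive as-is

# Rank measures by gain per hour of workload (cost-free measures first).
ranked = sorted(measures.items(),
                key=lambda kv: kv[1]["gain_pp"] / max(kv[1]["cost_hours"], 1e-9),
                reverse=True)

for name, m in ranked:
    print(f"{name:>20}: {verdict(m['cost_hours'], m['gain_pp'])}")
```

With these illustrative numbers, a costly measure with negligible gain is flagged for withdrawal, while cheap measures with a real gain are kept, mirroring the discussion below.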
In particular, for some of the adopted measures, it becomes clear that the amount of invested resources (faculty workload) did not justify their impact on the pass rate. For instance, the substantial overhead of grading red lights had almost no effect on the pass rate. Other measures, such as the weights given to the different exams, did have a positive impact without increasing the workload. Moreover, since all the introduced measures involve continuous assessment, our study shows that the corresponding workload has reached its limit.
Some pedagogical strategies used in this course around the online judge, as an automated aid to motivate, help, and evaluate students, have also been used successfully in other courses (such as Programming 2, Data Structures and Algorithms, Algorithms, and Functional Programming, among others). It could be interesting to extend this kind of analysis to those courses.
We are aware that the scope of our study could be extended. We have focused solely on the pass rate, but a finer analysis taking into account students' marks and motivation might bring more insight into the effectiveness of each measure.
A TECHNICAL DETAILS
In this appendix we calculate the number of working hours per task at each stage of our course, based on the 14 answers we obtained by surveying the current instructors, and we extrapolate these values to earlier stages. Note that most measures, once taken, remain in force; the workloads they involve therefore accumulate.
t_0 Kick-off (2006–2007): The course started with two kinds of lectures, theory and practical, of 3 hours per week each. Theory lectures were given to groups of 60 students, and the survey indicates that, on average, preparing one hour of theory lecture takes 1.5 hours. This results in a total of 2.5 hours of work (preparation + lecturing) per lecture hour. Since the course is 15 weeks long, we have 3 × 2.5 × 15 = 112.5 hours of theory work per group.
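The same arithmetic as a minimal Python sketch; the constants are the figures stated above, and the per-group total is derived from them rather than taken directly from the survey:

```python
# Workload arithmetic for the t_0 stage. The constants come from the
# figures stated above; the total is derived, not surveyed directly.
LECTURE_HOURS_PER_WEEK = 3         # theory lecture hours per week
PREP_HOURS_PER_LECTURE_HOUR = 1.5  # preparation per hour of lecture
WEEKS = 15                         # course length

# 1 hour of lecturing + 1.5 hours of preparation = 2.5 hours of work.
work_per_lecture_hour = 1 + PREP_HOURS_PER_LECTURE_HOUR

total_per_group = LECTURE_HOURS_PER_WEEK * work_per_lecture_hour * WEEKS
print(f"{total_per_group} hours per theory group per semester")  # 112.5
```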