sophisticated or complete answers.
The extra credit contributes to the additive part of grading. Although an additive weighted average is probably the most common form of grade calculation, it is also worth including a subtractive part. This allows a set of penalties for "minimal" requirements that students have no acceptable reason not to fulfill, e.g., style rules, the submission deadline, or the use of a specific library. The previously mentioned bonus complements these additive and subtractive parts.
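To make the combination of additive, subtractive, and bonus components concrete, the calculation can be sketched as a short script. In the following Python sketch, the component names, weights, penalty values, and the 0-100 scale are illustrative assumptions, not a prescribed scheme.

def final_grade(components, weights, penalties, bonus):
    # Additive weighted average, minus penalties, plus bonus, clamped to [0, 100].
    additive = sum(weights[name] * score for name, score in components.items())
    subtractive = sum(penalties)  # e.g., style violations, late submission
    return max(0.0, min(100.0, additive - subtractive + bonus))

# Example: two assignments and a test, one late-submission penalty, a small bonus.
grade = final_grade(
    components={"assignment1": 80, "assignment2": 90, "test": 70},
    weights={"assignment1": 0.3, "assignment2": 0.3, "test": 0.4},
    penalties=[5],  # late submission
    bonus=2,        # extra-credit task
)
print(grade)  # 76.0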
A crucial aspect in large classes is the possibility of machine-based assessment through automatic or semi-automatic grading. Nevertheless, the available tools, usually test-driven or inspired by contest judges, often provide no feedback and are challenging to adapt to specific needs (Keuning et al., 2016; Ahoniemi and Reinikainen, 2006).
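A minimal sketch of such test-driven grading, assuming a hypothetical exercise (computing the median of a list) and invented test cases, shows how per-test feedback can be produced alongside a score:

def grade_submission(student_fn, test_cases):
    # Run each test case; return the fraction of passed tests and feedback messages.
    passed, feedback = 0, []
    for args, expected in test_cases:
        try:
            result = student_fn(*args)
        except Exception as exc:  # crashing submissions still receive feedback
            feedback.append(f"{args}: raised {exc!r}")
            continue
        if result == expected:
            passed += 1
        else:
            feedback.append(f"{args}: expected {expected}, got {result}")
    return passed / len(test_cases), feedback

# An imperfect submission: correct for odd-length lists, wrong for even-length ones.
tests = [(([1, 3, 2],), 2), (([4],), 4), (([1, 2, 3, 4],), 2.5)]
score, messages = grade_submission(lambda xs: sorted(xs)[len(xs) // 2], tests)
print(score, messages)  # about 0.67, plus one message pointing to the failing case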
Besides traditional teacher assessment (whether of individual students or of groups), the possibility of peer assessment should also be considered (e.g., Indriasari et al., 2020).
Finally, although an n-ary scale is by far the most commonly used, a binary scale, where each task is graded as passed or failed, can provide a good basis for repeated submissions of the same assignment, thus fostering deeper learning. This is especially true if adequate feedback is given for each submission.
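For illustration, a binary scale with repeated submissions can be represented by a simple per-task record; the data structure below is an assumption used only to show the idea of accumulating attempts and feedback until the task is passed.

from dataclasses import dataclass, field

@dataclass
class TaskRecord:
    task_id: str
    passed: bool = False
    attempts: list = field(default_factory=list)  # (feedback, passed) per submission

    def submit(self, passed: bool, feedback: str):
        self.attempts.append((feedback, passed))
        self.passed = self.passed or passed  # once passed, the task stays passed

record = TaskRecord("lists-ex3")
record.submit(False, "fails on the empty list")
record.submit(True, "all tests pass")
print(record.passed, len(record.attempts))  # True 2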
3 CONCLUSIONS
Assessment choices are plentiful, and programming courses open even more options due to the possibility of practical machine support. The presented
guide provides a basis for reflection and is easily
adapted to specific course needs and teachers’ pref-
erences. A vast body of knowledge on teaching and
learning computer programming should be taken into
account when designing courses. This paper and the
presented guide are contributions in that direction.
REFERENCES
Ahoniemi, T. and Reinikainen, T. (2006). Aloha - a grad-
ing tool for semi-automatic assessment of mass pro-
gramming courses. In Proceedings of the 6th Baltic
Sea Conference on Computing Education Research:
Koli Calling 2006, Baltic Sea ’06, page 139–140, New
York, NY, USA. Association for Computing Machin-
ery.
Barros, J. P. (2018). Students’ perceptions of paper-
based vs. computer-based testing in an introductory
programming course. In CSEDU 2018-Proceedings
of the 10th International Conference on Computer
Supported Education, volume 2, pages 303–308.
SciTePress.
Bennedsen, J. and Caspersen, M. E. (2007). Failure
rates in introductory programming. SIGCSE Bull.,
39(2):32–36.
Bennedsen, J. and Caspersen, M. E. (2019). Failure rates
in introductory programming: 12 years later. ACM
Inroads, 10(2):30–36.
Biggs, J. and Tang, C. (2011). Teaching for Quality Learn-
ing at University. Open University Press, 4 edition.
Brown, S., Race, P., and Smith, B. (2004). 500 Tips on
Assessment. Routledge, 2 edition.
Cigas, J., Decker, A., Furman, C., and Gallagher, T. (2018).
How am i going to grade all these assignments? think-
ing about rubrics in the large. In Proceedings of the
49th ACM Technical Symposium on Computer Science
Education, SIGCSE ’18, page 543–544, New York,
NY, USA. Association for Computing Machinery.
Daly, C. and Waldron, J. (2004). Assessing the as-
sessment of programming ability. SIGCSE Bull.,
36(1):210–213.
de Raadt, M. (2012). Student created cheat-sheets in ex-
aminations: Impact on student outcomes. In Pro-
ceedings of the Fourteenth Australasian Computing
Education Conference - Volume 123, ACE ’12, page
71–76, AUS. Australian Computer Society, Inc.
Fitzgerald, S., Hanks, B., Lister, R., McCauley, R., and
Murphy, L. (2013). What are we thinking when we
grade programs? In Proceeding of the 44th ACM
Technical Symposium on Computer Science Educa-
tion, SIGCSE ’13, page 471–476, New York, NY,
USA. Association for Computing Machinery.
Garg, M. and Goel, A. (2022). A systematic literature
review on online assessment security: Current chal-
lenges and integrity strategies. Computers & Security,
113:102544.
Hwang, C. J. and Gibson, D. E. (1982). Using an effective
grading method for preventing plagiarism of program-
ming assignments. SIGCSE Bull., 14(1):50–59.
Indriasari, T. D., Luxton-Reilly, A., and Denny, P. (2020). A
review of peer code review in higher education. ACM
Trans. Comput. Educ., 20(3).
Kalogeropoulos, N., Tzigounakis, I., Pavlatou, E. A., and
Boudouvis, A. G. (2013). Computer-based assess-
ment of student performance in programing courses.
Computer Applications in Engineering Education,
21(4):671–683.
Keuning, H., Jeuring, J., and Heeren, B. (2016). Towards
a systematic review of automated feedback genera-
tion for programming exercises. In Proceedings of
the 2016 ACM Conference on Innovation and Technol-
ogy in Computer Science Education, ITiCSE ’16, page
41–46, New York, NY, USA. Association for Comput-
ing Machinery.
Luxton-Reilly, A., Becker, B. A., Cao, Y., McDermott, R., Mirolo, C., Mühling, A. M., Petersen, A., Sanders, K., Simon, and Whalley, J. L. (2017). Developing assessments to determine mastery of programming fundamentals.