potentially be used for APAAS-specific grading injections.
To handle these needs, it seems necessary
1. to randomise test cases,
2. to provide additional practical code inspection
techniques based on parsers,
3. and to consistently isolate the submission and the
evaluation logic in separate processes (at least the
console output should be separated, as the sketch
following this list illustrates).
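To illustrate the last point, the following minimal sketch (plain JDK, with purely illustrative names that are not taken from any particular APAAS) shows how an evaluator can at least redirect a submission's console output so that a student's own System.out.println calls cannot forge grading messages:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class OutputSeparationSketch {

    // Stands in for the student's submitted code (hypothetical).
    static void runSubmission() {
        // A cheating submission might try to print a fake grading message.
        System.out.println("All checks passed. Grade: 100%");
    }

    public static void main(String[] args) {
        PrintStream evaluatorOut = System.out;        // channel reserved for the evaluator
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        System.setOut(new PrintStream(captured));     // everything the submission prints goes here

        runSubmission();

        System.setOut(evaluatorOut);                  // restore the evaluator's channel
        evaluatorOut.println("[submission output] " + captured.toString().trim());
        evaluatorOut.println("[evaluator] grading results are reported on this channel only");
    }
}
```

Within a single JVM, a determined submission could of course call System.setOut itself, which is why the recommendation above is to isolate submission and evaluation logic in separate processes; the redirection merely illustrates the separation of output channels.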
As far as the author can survey the APAAS landscape,
precisely these features are only incompletely
provided. APAAS solutions are a valuable asset for
supporting practical programming courses: they minimise
routine work for advisers and provide immediate
feedback for students. However, as was shown, these
systems can be cheated quite easily. If we were to use
them (for instance, in highly hyped MOOC formats)
for the automatic certification of programming expertise,
the question arises whether we would certify the
expertise to program or the expertise to cheat, and what
this would mean for the reputation of such courses
(Alraimi et al., 2015).
In consequence, we should look at APAAS solutions
much more from a security point of view, and
in particular from a code injection point of view.
We identified the need to evolve unit testing frameworks
into more evaluation-oriented teaching solutions.
Based on the insights of this study, we are
currently working on a Java-based unit testing framework
that intentionally focuses on educational contexts. Its
working state can be inspected on GitHub
(https://github.com/nkratzke/JEdUnit).
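To make the idea of such evaluation-oriented checks more concrete, the following sketch compares a submission against a trusted reference implementation on freshly randomised inputs, so that hard-coded expected values no longer pass. All names are hypothetical, and the sketch deliberately does not reproduce JEdUnit's actual API:

```java
import java.util.Random;

public class RandomizedCheckSketch {

    // Trusted reference solution known only to the evaluator.
    static int referenceSum(int[] values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    // Stands in for the student's submitted method (hypothetical).
    static int submittedSum(int[] values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        int failures = 0;
        for (int i = 0; i < 100; i++) {
            // Fresh random input for every check run; hard-coded answers cannot anticipate it.
            int[] values = rnd.ints(rnd.nextInt(20) + 1, -1000, 1000).toArray();
            if (referenceSum(values) != submittedSum(values)) failures++;
        }
        System.out.println(failures == 0
            ? "All randomised checks passed."
            : failures + " of 100 randomised checks failed.");
    }
}
```

In a real setting, the reference implementation and the generated inputs would have to stay out of the submission's reach, which again points to the process isolation discussed above.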
ACKNOWLEDGEMENTS
The author expresses his gratitude to Vreda Pieterse,
Philipp Angerer, and all the anonymous reviewers for
their valuable feedback and proposals for improving
this paper. What is more, posting a preprint of this
paper to the Reddit programming community had a
unique and unexpected impact. The posting generated
hundreds of valuable comments and a 95% upvote ratio,
and it resulted in more than 40k views on ResearchGate
in a single day. It was impossible to answer all of this
constructive and exciting feedback. I am sorry for that
and thank this community so very much.
Teaching almost 200 students to program is a great
deal of work, requires much support, and is not a one-
person show. The practical courses in particular needed
excellent supervision and mentoring. Although all of
the following persons were (intentionally) not aware
of being part of this study, they all did a tremendous
job. Therefore, I have to thank the advisers of the
practical courses, David Engelhardt, Thomas Hamer,
Clemens Stauner, Volker Völz, and Patrick Willnow, and
our student tutors Franz Bretterbauer, Francisco Cardoso,
Jannik Gramann, Till Hahn, Thorleif Harder,
Jan Steffen Krohn, Diana Meier, Jana Schwieger, Jake
Stradling, and Janos Vinz. As a team, they managed
13 groups with almost 200 students, and they explained,
gave tips, and revealed plenty of their programming
tricks to a new generation of programmers.
REFERENCES
Ala-Mutka, K. M. (2005). A survey of automated assessment approaches for programming assignments. Computer Science Education, 15(2):83–102.
Alraimi, K. M., Zo, H., and Ciganek, A. P. (2015). Understanding the MOOCs continuance: The role of openness and reputation. Computers & Education, 80:28–38.
Burrows, S., Tahaghoghi, S. M. M., and Zobel, J. (2007). Efficient plagiarism detection for large code repositories. Softw. Pract. Exper., 37:151–175.
Caiza, J. C. and Alamo Ramiro, J. M. d. (2013). Automatic Grading: Review of Tools and Implementations. In Proc. of 7th Int. Technology, Education and Development Conference (INTED2013).
Campbell, D. T. and Stanley, J. C. (2003). Experimental and
Quasi-experimental Designs for Research. Houghton
Mifflin Company. reprint.
del Pino, J. C. R., Rubio-Royo, E., and Hernández-Figueroa, Z. J. (2012). A Virtual Programming Lab for Moodle with automatic assessment and anti-plagiarism features. In Proc. of the 2012 Int. Conf. on e-Learning, e-Business, Enterprise Information Systems, and e-Government.
Douce, C., Livingstone, D., and Orwell, J. (2005). Automatic test-based assessment of programming: A review. J. Educ. Resour. Comput., 5(3).
Gupta, S. and Gupta, B. B. (2017). Cross-site scripting (XSS) attacks and defense mechanisms: classification and state-of-the-art. International Journal of System Assurance Engineering and Management, 8(1):512–530.
Halfond, W. G. J. and Orso, A. (2005). AMNESIA: Analysis and monitoring for neutralizing SQL-injection attacks. In Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, ASE '05, pages 174–183, New York, NY, USA. ACM.
Hunter, J. D. (2007). Matplotlib: A 2D graphics environment. Computing in Science & Engineering, 9(3):90–95.
Ihantola, P., Ahoniemi, T., Karavirta, V., and Seppälä, O.
(2010). Review of recent systems for automatic assessment of programming assignments. In Proceedings of the 10th Koli Calling International Conference on Computing Education Research.