in the LMS. The student answers are furthermore roughly sorted from the most comprehensive to the least comprehensive, which reduces the cognitive load of the teacher: there is no need to jump back and forth between strong and weak answers all the time.
The metric also has the inherently useful property of providing implicit plagiarism detection: identical answers are listed beside each other, and similar answers typically end up close to each other. The tool furthermore maintains a list of comments previously given to students, which increases the consistency of the marking.
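As an illustration, the ordering and adjacency idea can be sketched in a few lines of Python (the interface is an assumption: the knowledge metric is treated as any function returning one numeric score per answer, and the word-count score below is a hypothetical placeholder, not the metric used by FrontScraper):

def order_for_marking(answers, score):
    # Sort from most to least comprehensive according to the metric.
    ranked = sorted(answers, key=score, reverse=True)
    # Equal scores land next to each other, so identical or near-identical
    # submissions become easy to spot while marking in order.
    for prev, curr in zip(ranked, ranked[1:]):
        if score(prev) == score(curr):
            print("possible plagiarism:", prev["student"], "vs", curr["student"])
    return ranked

answers = [{"student": "A", "text": "an answer"}, {"student": "B", "text": "an answer"}]
ranked = order_for_marking(answers, score=lambda a: len(set(a["text"].split())))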
Overall, this increases both marking speed and precision and reduces the teacher's cognitive load, which lowers the risk of fatigue and loss of focus during marking.
9 FUTURE WORK
Support for an offline mode should be added, so that results can be synchronized with ClassFronter afterwards if the LMS is too heavily loaded. This would also reduce the strain on the LMS while the system is in use.
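One possible design is sketched below, under the assumption that marking results can be serialized and uploaded independently; the upload_result callback stands in for a hypothetical ClassFronter API call:

import json, pathlib

QUEUE = pathlib.Path("pending_results.json")

def queue_result(result):
    # Store the result locally instead of writing to the LMS immediately.
    pending = json.loads(QUEUE.read_text()) if QUEUE.exists() else []
    pending.append(result)
    QUEUE.write_text(json.dumps(pending))

def sync(upload_result):
    # Push queued results once the LMS is reachable and lightly loaded.
    for result in (json.loads(QUEUE.read_text()) if QUEUE.exists() else []):
        upload_result(result)  # hypothetical ClassFronter upload call
    QUEUE.write_text("[]")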
Future work also includes support for more comprehensive text analysis, in order to capture the semantics of the text being marked. This can for example be done using latent semantic analysis (LSA) and similar techniques.
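To make the idea concrete, a minimal LSA sketch in Python using scikit-learn follows (an assumed extension, not part of the current tool): answers are embedded in a latent semantic space and ranked by cosine similarity to a model answer.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def lsa_rank(model_answer, answers, n_components=50):
    docs = [model_answer] + answers
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    # Truncated SVD over the TF-IDF matrix is the classic LSA construction.
    k = max(1, min(n_components, min(tfidf.shape) - 1))
    latent = TruncatedSVD(n_components=k).fit_transform(tfidf)
    # Rank answers by semantic closeness to the model answer.
    sims = cosine_similarity(latent[:1], latent[1:])[0]
    return sorted(zip(answers, sims), key=lambda p: p[1], reverse=True)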
Another idea is to support more advanced grading schemes, such as assigning grades based on percentile scores, or even distribution-based scoring that uses both currently and previously marked results together with grade calibration, as suggested by Sikora (2015).
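A percentile-based variant could look as follows (a sketch only; the grade boundaries are illustrative and not taken from Sikora (2015)):

def percentile_grade(score, pool):
    # Percentile rank: the fraction of pooled scores at or below this score.
    rank = sum(s <= score for s in pool) / len(pool)
    for threshold, grade in [(0.90, "A"), (0.75, "B"), (0.50, "C"), (0.30, "D")]:
        if rank >= threshold:
            return grade
    return "E"

# Pool currently and previously marked results to stabilize the distribution.
pool = [42.0, 55.5, 61.0, 70.5, 88.0, 91.5]
grades = [(s, percentile_grade(s, pool)) for s in pool]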
We may in the future consider writing a graphical user interface as an alternative to the current command-line interface, as well as improving the integration with the LMS. A comprehensive study quantifying the effect of using FrontScraper compared to alternative methods is also left as future work. It would likewise be interesting to evaluate how well FrontScraper works for different subjects and disciplines. Another idea is combining FrontScraper with peer-to-peer evaluation, where students could compare their own answers to those of their peers as a rough check before submitting, in order to get a realistic view of their own contribution. This could encourage students to submit higher-quality answers.
REFERENCES
Barstad, V., Goodwin, M., and Gjøsæter, T. (2014). Predicting Source Code Quality with Static Analysis and Machine Learning.
Buckley, E. and Cowap, L. (2013). An evaluation of the use of Turnitin for electronic submission and marking and as a formative feedback tool from an educator's perspective. British Journal of Educational Technology, 44(4):562–570.
Dretske, F. (1981). Knowledge and the Flow of Information. MIT Press.
Dretske, F. (2000). Perception, Knowledge and Belief: Selected Essays. Cambridge University Press.
Dretske, F. I. (1997). Naturalizing the Mind. MIT Press.
Foltz, P. W., Laham, D., and Landauer, T. K. (1999). Automated Essay Scoring: Applications to Educational Technology. volume 1999, pages 939–944.
Kakkonen, T., Myller, N., Timonen, J., and Sutinen, E. (2005). Automatic Essay Grading with Probabilistic Latent Semantic Analysis. In Proceedings of the Second Workshop on Building Educational Applications Using NLP, EdAppsNLP 05, pages 29–36, Stroudsburg, PA, USA. Association for Computational Linguistics.
Rehder, B., Schreiner, M. E., Wolfe, M. B. W., Laham, D., Landauer, T. K., and Kintsch, W. (1998). Using latent semantic analysis to assess knowledge: Some technical considerations. Discourse Processes, 25(2-3):337–354.
Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27:379–423, 623–656.
Sikora, A. S. (2015). Mathematical theory of student assessment through grading.
Valenti, S., Neri, F., and Cucchiarelli, A. (2003). An Overview of Current Research on Automated Essay Grading. Journal of Information Technology Education: Research, 2(1):319–330.
Zen, K., Iskandar, D., and Linang, O. (2011). Using Latent Semantic Analysis for automated grading programming assignments. In 2011 International Conference on Semantic Technology and Information Retrieval (STAIR), pages 82–88.