Authors: Štefan Pero and Tomáš Horváth
Affiliation: Faculty of Science, Pavol Jozef Šafárik University, Slovak Republic
Keyword(s):
Grading, Student Assessment, Inconsistent Evaluation, Textual Evaluation, Personalization.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence and Decision Support Systems; Assessment Software Tools; Computer-Aided Assessment; Computer-Supported Education; e-Learning; e-Learning Platforms; Enterprise Information Systems; Information Technologies Supporting Learning; Intelligent Tutoring Systems; Learning/Teaching Methodologies and Assessment; Metrics and Performance Measurement; Pedagogy Enhancement with e-Learning; Simulation and Modeling; Simulation Tools and Platforms
Abstract:
Evaluating students' solutions to tasks or projects is a complex process driven mainly by the subjective evaluation criteria of a given teacher. Each teacher is biased to some degree, i.e., in how strictly she assigns grades to solutions. Besides the teacher's bias, other factors also contribute to grading: teachers can make mistakes, the grading scale may be too coarse-grained or too fine-grained, etc. Grades are often provided together with the teacher's textual evaluations, which are considered more expressive than a single number. Such textual evaluations, however, should be consistent with the grades: if two solutions have very similar textual evaluations, their grades should also be very similar. Nevertheless, inconsistencies between textual evaluations and grades tend to arise, especially when a teacher has to assess a large number of solutions, or when more than one teacher is involved in the evaluation process. In this paper, we propose a simple approach for detecting inconsistencies between textual evaluations and grades. Experiments are provided on two real-world datasets collected from the teaching process at our university.