Authors: Maria De Marsico; Andrea Sterbini and Marco Temperini
Affiliation: Sapienza University, Italy
Keyword(s): Peer Assessment, OpenAnswer Questions, Automatic Grade Prediction.
Related Ontology Subjects/Areas/Topics: Computer-Supported Education; Learning/Teaching Methodologies and Assessment
Abstract: In this paper we experimentally investigate the influence of several factors on the final performance of an automatic grade prediction system based on teacher-mediated peer assessment. Experiments are carried out with OpenAnswer, a system designed for peer assessment of open-ended questions. It exploits a Bayesian Network to model the students' learning state and the propagation of the information injected into the system by peer grades and by a (partial) grading from the teacher. The relevant variables are characterized by a probability distribution (PD) over their discrete values. We aim at analysing the influence of the initial setup of the PDs of these variables on the ability of the system to predict a reliable grade for answers not yet graded by the teacher. In particular, we investigate the influence of the initial choice of the PD for the student's knowledge (K), especially when no information is available on the class proficiency in the examined skills, and of the PD of the correctness of the student's answers conditioned on her knowledge, P(C|K). The latter is expressed through different Conditional Probability Tables (CPTs), tested in turn to identify the one achieving the best final results. Moreover, we test different strategies to map the final PD for the correctness (C) of an answer, namely the grade that will be returned to the student, onto a single discrete value.
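The quantities named in the abstract can be sketched concretely. The following minimal Python/numpy fragment is an illustration only, with made-up values: a uniform prior over K (the "no information on class proficiency" case), one hypothetical CPT for P(C|K), the resulting marginal PD for C, and two of the possible strategies for mapping that PD onto a single discrete grade (mode versus rounded expectation). None of these numbers or choices are taken from the OpenAnswer system itself.

```python
import numpy as np

# Discrete levels shared by knowledge K and correctness C,
# e.g. grades 1..4 (illustrative; the actual scale is an assumption).
levels = np.arange(1, 5)

# Prior PD over the student's knowledge K: uniform when nothing
# is known about the class proficiency in the examined skills.
p_k = np.full(len(levels), 1.0 / len(levels))

# One hypothetical CPT for P(C|K): rows indexed by K, columns by C.
# Here correctness is assumed to concentrate around the knowledge level.
cpt = np.array([
    [0.70, 0.20, 0.07, 0.03],
    [0.20, 0.60, 0.15, 0.05],
    [0.05, 0.15, 0.60, 0.20],
    [0.03, 0.07, 0.20, 0.70],
])

# Marginal PD over correctness: P(C=c) = sum_k P(C=c|K=k) P(K=k).
p_c = p_k @ cpt

# Two strategies to map the final PD onto a single discrete grade:
mode_grade = levels[np.argmax(p_c)]        # most probable value
expected_grade = int(round(levels @ p_c))  # rounded expectation
```

In the full system the PD over C would first be updated by the evidence propagated through the Bayesian Network (peer grades and the teacher's partial grading); the mapping step shown last is what turns the resulting distribution into the grade returned to the student.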