the support of the teacher. The mechanism was based on Bayesian Networks (De Marsico et al., 2017a), while in (De Marsico et al., 2017b) a first version of a modified K-NN technique was presented. Here we face the same problem, but with variations in both the learning algorithms and the student models. First of all, we enhance the Student Model (SM) by adding another stochastic variable, the Dev variable, representing the credibility of the Knowledge Level K.
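To make the enhanced model concrete, the following is a minimal Python sketch of a student model holding the two variables; the field names, domains, and the update rule are our own illustrative assumptions, not the paper's specification.

```python
from dataclasses import dataclass

@dataclass
class StudentModel:
    K: int        # estimated Knowledge Level of the learner
    Dev: float    # credibility of the current estimate of K, in [0, 1]

    def weaken_credibility(self, observed_error: float) -> None:
        # Hypothetical rule: credibility decays with the discrepancy
        # between predicted and observed grades.
        self.Dev = max(0.0, self.Dev - observed_error)

# Example: a learner judged at level 7 with a moderately credible estimate.
sm = StudentModel(K=7, Dev=0.8)
sm.weaken_credibility(0.2)   # Dev drops to 0.6
```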
Furthermore, we propose a more complete version of the learning algorithm, namely a modified version of K-NN (Mitchell, 1997). Finally, a novel simulation environment is used in order to simulate communities of learners. In Section 2 we present a brief review of the literature relevant to this work; in Section 3 the algorithms are shown. In Section 4 we illustrate an experimental evaluation in a simulated environment, and finally in Section 5 the conclusions and future developments are drawn.
2 RELATED WORK
The literature offers many articles proposing Machine Learning techniques and, more generally, Artificial Intelligence algorithms for the study of the dynamics both of individuals and of communities of students (Limongelli et al., 2008; Limongelli et al., 2013; Limongelli et al., 2015). Here we address some works worthy of mention for peer-assessment.
Peer-assessment (Kane and Lawler, 1978) is an activity in which a student (or a group) is allowed to evaluate other students' assignments (and possibly to self-evaluate their own assignments). It can be organized in different ways, yet a basic aspect is that it can be considered one of the activities in which social interaction and collaboration among students can be triggered. It can also serve as a way to verify how well the teacher can communicate to the students her own quality requirements with respect to the learning topics: if this happens, assessments from peers and from the teacher agree better (Sadler and Good, 2006).
Student involvement in assessment typically takes the form of peer assessment or self assessment. In both of these activities, students engage with criteria and standards, and apply them to make judgments. In self assessment, students judge their own work, while in peer assessment they judge the work of their peers (Falchikov and Goldfinch, 2000).
Peer assessment is grounded in philosophies of active learning (Piaget, 1971) and andragogy (Cross, 1981), and may also be seen as a manifestation of social constructionism (Vygotsky, 1962), as it often involves the joint construction of knowledge through discourse.
A peer assessment system to be mentioned is the proposal of (De Marsico et al., 2017a), where the OpenAnswer peer assessment system is presented. A peer assessment engine, based on Bayesian networks, is trained for the evaluation of open-ended questions. The system is based on the SM, composed of some stochastic variables, such as the variable K, representing the learner's Knowledge Level, and the variable J, representing the learner's ability to judge the answers of her peers. Students initially grade n open-ended exercises of their peers. Subsequently, the teacher grades m students. Each student has therefore an associated Conditional Probability Table that evolves over time. This system has the same goal as ours, but is based on different mechanisms. The Bayesian network presents some aspects of complexity that make the whole system a black box and hardly tractable for large numbers of students, as in the case of MOOCs, while our learning system has a much lower complexity and does not present problems of intractability.
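As an illustration of this kind of student model, the following is a minimal sketch of a conditional probability table linking J to K. The variable domains and the probability values are our own assumptions for illustration; they are not the tables used by OpenAnswer.

```python
# Hypothetical discrete domains for the two stochastic variables.
K_LEVELS = ["low", "medium", "high"]   # Knowledge Level K
J_LEVELS = ["poor", "fair", "good"]    # judging ability J

# P(J | K): each row sums to 1. The values are illustrative only;
# in a real system the table would evolve as grades are observed.
CPT_J_GIVEN_K = {
    "low":    {"poor": 0.6, "fair": 0.3, "good": 0.1},
    "medium": {"poor": 0.2, "fair": 0.5, "good": 0.3},
    "high":   {"poor": 0.1, "fair": 0.3, "good": 0.6},
}

def p_judgement(j: str, k: str) -> float:
    """Return P(J = j | K = k) from the table."""
    return CPT_J_GIVEN_K[k][j]
```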
Another work (Anson and Goodman, 2014) proposes peer assessment to improve Student Team Experiences. An online peer assessment system and team improvement process was developed based on three design criteria: efficient administration of the assessment, promotion of quality feedback, and the fostering of effective team processes.
In (Sterbini and Temperini, 2012) the authors propose an approach to open answers grading, based on Constraint Logic Programming (CLP) and peer assessment, where students are modeled as triples of finite domain variables. The CLP Prolog module supported the generation of hypotheses of correctness for answers (grounded on students' peer-evaluation), and the assessment of such hypotheses (also based on the answers already graded by the teacher).
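To give a flavour of this style of modeling, here is a minimal finite-domain sketch in Python rather than CLP; the three variables per student, their domains, and the constraint are hypothetical placeholders, since the cited work defines its own triples and rules in Prolog.

```python
from itertools import product

# Hypothetical finite domains for one student's triple of variables,
# e.g. knowledge level K, judging ability J, answer correctness C.
DOMAINS = {"K": range(1, 6), "J": range(1, 6), "C": (0, 1)}

def consistent(k: int, j: int, c: int) -> bool:
    # Illustrative constraint: a correct answer (c == 1) is only
    # hypothesized for students with sufficient knowledge.
    return c == 0 or k >= 3

# Enumerate all assignments of the triple satisfying the constraint,
# mimicking in plain Python what a CLP solver derives by propagation.
hypotheses = [
    (k, j, c)
    for k, j, c in product(DOMAINS["K"], DOMAINS["J"], DOMAINS["C"])
    if consistent(k, j, c)
]
```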
3 THE PEER ASSESSMENT ENGINE
In this Section we show the algorithms and the rationale of our proposal. Here we present an enhanced version of the engine presented in (De Marsico et al., 2017b). The most important differences are: the generation of a simulating environment producing the sample, and a different student model evolution taking into account some community aspects. The inference engine is based on a learning algorithm: K-NN. This is a Lazy Learning approach (e.g. (Mitchell, 1997)), also referred to as Instance Based learning: basically, instead of building an explicit general model, the algorithm adapts its classification to each further instance it has to classify.
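As a reference point for the modifications discussed in the rest of this Section, the following is a minimal sketch of standard K-NN classification; the feature representation, the distance function, and the value of k are illustrative assumptions, not the modified algorithm itself.

```python
from collections import Counter
import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(query, examples, k=3):
    """Classic K-NN: store all labeled examples (lazy learning) and,
    only when a query arrives, label it by majority vote among the
    k nearest stored instances.

    examples: list of (feature_vector, label) pairs.
    """
    neighbors = sorted(examples, key=lambda ex: euclidean(query, ex[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Example: grade-like feature vectors labeled with a knowledge level.
train = [((8.0, 7.5), "high"), ((4.0, 5.0), "medium"), ((2.0, 3.0), "low"),
         ((7.0, 8.0), "high"), ((5.0, 4.5), "medium")]
print(knn_classify((6.5, 7.0), train))  # -> "high"
```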