Peer Assessment and Knowledge Discovering in a Community of Learners

Maria De Marsico¹, Filippo Sciarrone², Andrea Sterbini¹ and Marco Temperini²

¹Dept. of Computer Science, Sapienza University, Via Salaria, 113, 00189 Roma, Italy
²Dept. of Computer, Control and Management Engineering, Sapienza University, Via Ariosto, 25, 00184 Roma, Italy
Keywords: Peer Assessment, Machine Learning, Student Modeling.
Abstract: Thanks to the exponential growth of the Internet, Distance Education is becoming more and more strategic in many fields of daily life. Its main advantage is that students can learn through appropriate web platforms that let them take advantage of multimedia and interactive teaching materials, without constraints of time or space. Today the Internet offers many platforms suitable for this purpose, such as Moodle and ATutor; Coursera is another example of a platform that offers different courses to thousands of enrolled students. This approach to learning, however, poses new problems, such as assessing the learning status of each learner when thousands of students follow a course, as in Massive Open Online Courses (MOOCs). Peer assessment can be a solution to this problem: evaluation takes place between peers, creating a dynamic in the community of learners that evolves autonomously. In this article, we present a first step in this direction through a peer assessment mechanism led by the teacher, who intervenes by evaluating a very small part of the students. Through a mechanism based on machine learning, and in particular on a modified form of K-NN, given the teacher's grades, the system should converge towards an evaluation that is as similar as possible to the one the teacher would have given. An experiment is presented with encouraging results.
1 INTRODUCTION
Thanks to the exponential growth of the Internet that has occurred in recent years, many fields have changed or are radically changing their approach to training. Today many distance courses are offered on the web, such as Coursera¹ and Khan Academy², provided through appropriate technology platforms available 24 hours a day. This approach is proving a great success for various and obvious reasons: first of all, users can manage their own training time, without any space or time restrictions. Moreover, with the advent of HTML5, the available teaching materials can present strong multimedia and interactive features, making learning even more enjoyable. Another important aspect is that of learning communities, where professionals, but also common people, propose learning paths. The number of participants must also be taken into consideration: a specific university course can have 200 students using a platform, while courses like those proposed by Coursera can boast thousands.

¹ https://www.coursera.org
² https://it.khanacademy.org/
These new scenarios pose new aspects and problems: modern pedagogy is re-evaluating the theory of social constructivism, in which students also learn through peer interactions (Vygotsky, 1962), while the aspect of student assessment, with big numbers, requires a re-thinking of the approach. It would be impossible for a teacher to correct thousands of assignments. For this reason, in recent years software tools are being developed for the automatic correction of open-answer assignments. On the other hand, it is not always possible to monitor progress in a learning path by means of summative assessments with closed answers (such as tests). The work that we present in this article deals with this last aspect: a novel semi-automatic method that helps the teacher to evaluate a community of students on open-answer assignments. In other articles, we have already addressed this problem with the OpenAnswer system, where a mechanism of correction of open-ended questions was proposed, with
the support of the teacher. The mechanism was based on Bayesian Networks (De Marsico et al., 2017a), while in (De Marsico et al., 2017b) a first version of a modified K-NN technique was presented. Here we face the same problem but with different variations in the learning algorithms and in the student models. First of all, we enhance the Student Model (SM), adding another stochastic variable, the Dev variable, representing the credibility of the Knowledge Level K. Furthermore, we propose a more complete version of the learning algorithms, representing a modified version of K-NN (Mitchell, 1997). Finally, a novel simulation environment is used in order to simulate communities of learners. In Section 2 we present a brief review of the literature relevant to this work; in Section 3 the algorithms are shown; in Section 4 we illustrate an experimental evaluation in a simulated environment; and finally in Section 5 the conclusions and future developments are drawn.
2 RELATED WORK
The literature offers many articles proposing Machine Learning techniques and, more generally, Artificial Intelligence algorithms for the study of the dynamics both of individuals and of communities of students (Limongelli et al., 2008; Limongelli et al., 2013; Limongelli et al., 2015). Here we address some works worthy of mention for peer assessment.

Peer assessment (Kane and Lawler, 1978) is an activity in which a student (or a group) is allowed to evaluate other students' assignments (and possibly self-evaluate her own). It can be organized in different ways, yet a basic aspect is that it can be considered one of the activities in which social interaction and collaboration among students can be triggered. It can also serve as a way to verify how well the teacher can communicate to the students her own quality requirements with respect to the learning topics: if this happens, assessments from peers and from the teacher agree better (Sadler and Good, 2006).
Student involvement in assessment typically takes the form of peer assessment or self assessment. In both of these activities, students engage with criteria and standards, and apply them to make judgments. In self assessment, students judge their own work, while in peer assessment they judge the work of their peers (Falchikov and Goldfinch, 2000).

Peer assessment is grounded in philosophies of active learning (Piaget, 1971) and andragogy (Cross, 1981), and may also be seen as a manifestation of social constructionism (Vygotsky, 1962), as it often involves the joint construction of knowledge through discourse.
A peer assessment system to be mentioned is the proposal of (De Marsico et al., 2017a), where the OpenAnswer peer assessment system is presented. A peer assessment engine, based on Bayesian networks, is trained for the evaluation of open-ended questions. The system is based on the SM, composed of some stochastic variables, such as the variable K, representing the learner's Knowledge Level, and the variable J, representing the learner's ability to judge the answers of her peers. Students initially grade n open-ended exercises of their peers. Subsequently, the teacher grades m students. Each student therefore has an associated Conditional Probability Table that evolves with time. This system has the same goal as ours but is based on different mechanisms. The Bayesian network presents some aspects of complexity that make the whole system a black box, scarcely tractable for large numbers of students, as in the case of MOOCs, while our learning system has a much lower complexity and does not present problems of intractability. Another work (Anson and Goodman, 2014) proposes peer assessment to improve student team experiences. An online peer assessment system and team improvement process was developed based on three design criteria: efficient administration of the assessment, promotion of quality feedback, and fostering of effective team processes. In (Sterbini and Temperini, 2012) the authors propose an approach to open-answer grading, based on Constraint Logic Programming (CLP) and peer assessment, where students are modeled as triples of finite-domain variables. The CLP Prolog module supports the generation of hypotheses of correctness for answers (grounded on students' peer evaluations), and the assessment of such hypotheses (also based on the answers already graded by the teacher).
3 THE PEER ASSESSMENT ENGINE
In this Section we show the algorithms and the rationale of our proposal. Here we present an enhanced version of the engine presented in (De Marsico et al., 2017b). The most important differences are: the generation of a simulating environment producing the sample, and a different student model evolution taking into account some community aspects. The inference engine is based on a learning algorithm: K-NN. This is a Lazy Learning approach (e.g. (Mitchell, 1997)), also referred to as Instance Based learning: basically, in order to learn to better classify elements, the algorithm adapts the classification to each further instance of the elements, which becomes part of the training set. Each training instance is represented as a point in the n-dimensional space of the instance attributes.
3.1 The Student Model
Each student is represented by a Student Model (SM), SM = {K, J, Dev, St}, composed of the following variables:

- K ∈ [1, 10]. Practically, it is the grade that the teacher has assigned to her through the correction of one or more structured open-ended exercises. From a learning point of view, it represents the learner's competence (Knowledge level) about the question domain;

- J ∈ [0, 1]. It is a measure of the learner's assessing capability (Judgement) and depends on K;

- Standard Deviation Dev. It represents the credibility of the value of K: the higher this value, the less credible the value of K of the student. Dev is calculated as the standard deviation generated, for each i-th learner, as follows:

Dev_i = \sqrt{ \frac{\sum_{l=1}^{n} (K_i - K_l)^2}{n} }    (1)

being each K_l the grade given by one of the n students who graded her;

- St ∈ {CORE, NO_CORE}. Each student can be in two different states: CORE and NO_CORE. Initially all the students are NO_CORE. If the student is graded by the teacher then she becomes a CORE student. These students, as we will see later, are important for the dynamics of the network. Each NO_CORE student is represented as s^-, while a CORE student is represented as s^+.
Consequently, the community of students is, at any given moment, dynamically parted into two groups: the Core Group (CG) and its complement. CG is composed of the students whose answers have been graded directly by the teacher: for them K is given (fixed). In the following we also call this set S^+, and call its elements the s^+ students. On the contrary, S^- is the set of students whose grade is to be inferred (so, they have been graded only by peers).

By this SM representation, each learner can be represented as a point in a 2-dimensional space (K, J).
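To make the model concrete, the following is a minimal Python sketch of the SM, assuming the grade scale above; the names (Student, received, dev_of) are our illustrative choices, not from the paper, and dev_of implements Eq. 1.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Student:
    k: float = 0.0        # Knowledge level K in [1, 10]
    j: float = 0.0        # Judgement J in [0, 1]
    dev: float = 0.0      # credibility of K (Eq. 1)
    core: bool = False    # St: True = CORE (graded by the teacher)
    received: list = field(default_factory=list)  # peer grades K_l received

def dev_of(s: Student) -> float:
    """Eq. 1: standard deviation of the received peer grades around K."""
    return math.sqrt(sum((s.k - kl) ** 2 for kl in s.received) / len(s.received))

# Peers who disagree widely yield a high Dev, i.e., a poorly credible K.
s = Student(k=6.0, received=[2, 9, 7])
s.dev = dev_of(s)
print(round(s.dev, 2))  # ~2.94
```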
3.2 Students Model Initialization
First, each SM is initialized as follows:

- The teacher assigns an open-ended question to all the students;

- Each student provides an answer;

- Each student grades the answers of n different peers, and her answer receives n peer grades;

- Each s_l student model, SM_l = {K_l, J_l, Dev_l, St_l}, is initialized as follows:

K_l = \bar{K}_l = \frac{\sum_{i=1}^{n} K_i}{n}    (2)

where K_i is the grade received from the i-th of the n peers who graded the s_l student. In this way, the K_l value is initialized with the mean of all received grades. The rationale is that in this step we do not know the differences among students' true assessment capabilities, and so we give to each of them the same weight.
- For each s_l student, J_l is initialized as follows:

J_l = \frac{1}{1 + \sqrt{\sum_{i=1}^{n} Δ_i^2}}    (3)

with Δ_i^2 = (K_{lj} - \bar{K}_j)^2, being K_{lj} the grade assigned by the student s_l to the student s_j, and \bar{K}_j the arithmetic mean, i.e., the initial K of the student s_j, computed by Eq. 2. So, if a student grades her n peers with values always equal to their \bar{K} values, her J value gets maximal: J = 1 (here we have no teacher's grades available yet, so we have to deal with the peer evaluations only).
- All students are initialized to St = NO_CORE.
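As a minimal sketch of this initialization, the helpers below implement Eq. 2 and Eq. 3; the function names and toy data are illustrative assumptions, not from the paper.

```python
import math

def init_k(received):
    """Eq. 2: K is the mean of the grades received from the n peers."""
    return sum(received) / len(received)

def init_j(given, mean_k):
    """Eq. 3: J from the squared deltas between given grades and peer means.

    given:  {peer_id: grade this student assigned to that peer}
    mean_k: {peer_id: that peer's initial mean grade, from Eq. 2}
    """
    deltas = sum((g - mean_k[p]) ** 2 for p, g in given.items())
    return 1.0 / (1.0 + math.sqrt(deltas))

print(init_k([7, 6, 8]))                                   # 7.0
print(init_j({"s2": 5, "s3": 9}, {"s2": 5.0, "s3": 8.0}))  # 0.5
# A student who always matches the peers' mean grades gets the maximum J = 1.
```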
Once the whole peer evaluation has been completed, and no teacher's grading has yet been performed, our module's overall learning process starts with an initialization step: the students' SMs are initialized based solely on the peer-evaluation data. Then the learning process continues: at each following step, some answers from the S^- students are graded by the teacher, consequently some students are extracted from S^- and added to S^+, and the SMs are recomputed. In particular, at each step the positions of the points representing S^- students in the (K, J) space change, implying a new classification for them, which depends on their distance from points in S^+, according to the K-NN protocol.
At each step the module learns to (hopefully) better classify the students in S^-, until a termination condition suggests to stop cycling, and the S^- students' SMs become the grades finally inferred by the module.
3.3 Students Model Evolution
After the SM initialization, all learners belong to the S^- set. Each learner evolves in the (K, J) space as follows:

- The teacher is suggested a ranked list of students/answers to grade, sorted by the Dev key (see the sketch after this list). Dev is a very important variable for each learner, because it measures how much the current knowledge level K differs from the individual evaluations given by the peers to the student herself. A very high Dev means a K that is not very believable, as the student has received from her n peers ratings that are very different from each other; in this case a teacher's intervention on the value of K could be very beneficial.
- The teacher selects a group of students/answers in the ranked list, and grades them. Such grades are the new, final, K^+ values for those students;

- The graded students become s^+ students, and their position in the (K, J) space changes;

- In a chain, all peers who had graded the student who became s^+ change their models. The model-updating algorithm recursively follows a graph path starting from the graded student, and so on backwards. For each learner, first K and J are updated. Once all the students influenced by the teacher's grade have been updated, all their Dev values are updated.
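The suggestion step of the first bullet above reduces to sorting the NO_CORE students by descending Dev. A minimal sketch, with an illustrative dictionary-based store:

```python
# Rank ungraded (NO_CORE) students by Dev, highest first: the least credible
# K values are proposed to the teacher before the others.
students = {
    "s1": {"dev": 4.24, "core": False},
    "s2": {"dev": 1.10, "core": False},
    "s3": {"dev": 2.86, "core": True},   # already graded by the teacher
}
ranked = sorted((sid for sid, s in students.items() if not s["core"]),
                key=lambda sid: students[sid]["dev"], reverse=True)
print(ranked)  # ['s1', 's2'] -- the teacher grades from the top of this list
```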
In the following we will use KMIN and KMAX to denote the minimum and maximum values for K (i.e., here respectively 1 and 10). IMAX will denote the maximum difference between two values of K, i.e., here 9. Moreover, JMIN and JMAX will denote the minimum and maximum values for J (i.e., here respectively 0 and 1). Finally, DevMIN and DevMAX represent the lowest and highest values for the variable Dev, i.e., DevMIN = 0 and DevMAX = 9.

The SM updating is explained in detail in the next paragraphs.
3.3.1 Updating of the Graded Learner
The graded learner's SM is updated. First the K value is updated:

K^+ = K_{teacher}    (4)

being K_{teacher} the grade assigned by the teacher. Secondly the J value:

J^+_{new} = J_{old} + α (JMAX - J_{old})    (0 ≤ α ≤ 1)
J^+_{new} = J_{old} + α J_{old}    (α < 0)
α = \frac{K_{teacher} - K_{old}}{IMAX}    (5)
Notice, in Eq. 5:

1. A convex function has been adopted for the J update, providing the two cases according to the possible value of α. In particular, J_{old} could stand for J^+_{old} or J^-_{old}, depending on the student being already in S^+ (case J^+_{old}), just entering S^+ (case J^-_{old}), or remaining in S^- (case J^-_{old} again).
2. In general we assume that the assessment skill of a student depends on her Knowledge Level K, so the J value is a function of K. In the case K_{teacher} = K_{old}, no change is implied for J. Also notice that the difference K_{teacher} - K_{old} is normalized with respect to IMAX. If the student receives a grade higher than her current one, we increase her Judgement Level: the higher the level of knowledge, the higher her judgement capability. Otherwise J decreases. Equation 5 increases or decreases J by an amount such that its value always remains in the range [0, 1]. Moreover, we used this type of evolutionary form as it is the easiest to treat as a first approach, and also because it is very often used as an update of statistical variables in a machine learning context (see for example (Bishop, 2006)).
Subsequently the value of Dev is modified, recalculating it on the student graded by the teacher according to the same rule used in Eq. 1, i.e.:

Dev_{new} = \sqrt{ \frac{\sum_{l=1}^{n} (K_{teacher} - K_l)^2}{n} }    (6)
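The whole graded-learner update (Eqs. 4-6) fits in a few lines. A sketch with the paper's constants; the function and argument names are our illustrative choices:

```python
import math

IMAX, JMAX = 9.0, 1.0

def update_graded(k_old, j_old, received, k_teacher):
    """Eqs. 4-6: update K, J and Dev of the student just graded by the teacher."""
    alpha = (k_teacher - k_old) / IMAX                 # Eq. 5
    if alpha >= 0:
        j_new = j_old + alpha * (JMAX - j_old)         # convex step towards JMAX
    else:
        j_new = j_old + alpha * j_old                  # convex step towards 0
    dev_new = math.sqrt(                               # Eq. 6
        sum((k_teacher - kl) ** 2 for kl in received) / len(received))
    return k_teacher, j_new, dev_new                   # Eq. 4: K+ = K_teacher

print(update_graded(k_old=6.0, j_old=0.4, received=[2, 9, 7], k_teacher=8.0))
```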
3.3.2 Other SMs Updating
Once the student graded by the teacher has changed her model, the algorithm recursively changes the models of all the other students. The students' community, from the point of view of the data structure that represents it, can be seen as a weighted oriented graph where each node is a student and the following rules apply:
- Two nodes s_i and s_j are connected by a weighted edge iff s_i graded s_j (s_i → s_j) or s_j graded s_i (s_j → s_i);

- each edge is tagged with a weight w_{ij}, representing the grade that the student s_i gave s_j.
Fig. 1 shows an example of the graph. The algorithm works recursively on the adjacency matrix, starting from the graded student. For each student (i.e., a node) that is not a CORE student, the algorithm modifies the SM. All the students s^- who are influenced by the graded student are modified according to the following rules (students s^+ are fixed, because graded by the teacher):

K_{new} = K_{grading} + α (KMAX - K_{grading})    (0 ≤ α ≤ 1)
K_{new} = K_{grading} + α K_{grading}    (α < 0)
α = \frac{K_{grading} - K_{graded}}{IMAX} · \frac{Dev_{grading}}{IMAX}    (7)

where K_{new} is the new value of K of the intermediate student (in Fig. 1 it is the s_1 node). The Dev_{grading}/IMAX factor expresses how prone the value of K is to change: the higher this factor, the more the value of K changes. The rationale behind this choice is that a student with a high Dev has received very different grades from the peers who graded her, and therefore it is better that her K changes. Each J value is changed as follows:
J_{new} = J_{grading} + β (JMAX - J_{grading})    (0 ≤ β ≤ 1)
J_{new} = J_{grading} + β J_{grading}    (β < 0)
J_{new} = J_{grading} + (K_{grading} - K_{graded})    (β = 0 ∧ J_{grading} = J_{graded})

with:

β = \frac{K_{new} - K_{grading}}{IMAX} · |J_{grading} - J_{graded}| · \frac{Dev_{grading}}{IMAX}    (8)
Afterwards, in order to complete the SMs, all the Dev variables are updated.
Figure 1: An extract of the graph. The teacher has graded the student s_k. Starting from this student, the algorithm walks backwards, changing the models of the students who graded her: first s_1, then s_8, s_11, s_17; then it moves to s_5 and s_7.
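The propagation can be sketched as a depth-first backward walk of the grading graph. The code below follows our reading of Eqs. 7-8 (including the degenerate β = 0 case as printed); the graders map and all other names are illustrative assumptions:

```python
IMAX, KMAX, JMAX = 9.0, 10.0, 1.0

def update_intermediate(s, src):
    """Eqs. 7-8: update student s, who graded the already-updated student src."""
    k_old, j_old = s["k"], s["j"]
    alpha = (k_old - src["k"]) / IMAX * (s["dev"] / IMAX)          # Eq. 7
    s["k"] = k_old + (alpha * (KMAX - k_old) if alpha >= 0 else alpha * k_old)
    beta = (s["k"] - k_old) / IMAX * abs(j_old - src["j"]) * (s["dev"] / IMAX)
    if beta == 0 and j_old == src["j"]:
        s["j"] = j_old + (k_old - src["k"])   # degenerate case of Eq. 8, as printed
    elif beta >= 0:
        s["j"] = j_old + beta * (JMAX - j_old)                     # Eq. 8
    else:
        s["j"] = j_old + beta * j_old

def propagate(start, graders, students, visited=None):
    """Walk the grading graph backwards from the teacher-graded student."""
    visited = visited or {start}
    for g in graders.get(start, ()):
        if g not in visited and not students[g]["core"]:
            visited.add(g)
            update_intermediate(students[g], students[start])
            propagate(g, graders, students, visited)

students = {
    "sk": {"k": 8.0, "j": 0.6, "dev": 1.0, "core": True},   # graded by the teacher
    "s1": {"k": 5.0, "j": 0.4, "dev": 3.0, "core": False},  # s1 graded sk
}
propagate("sk", {"sk": ["s1"]}, students)
print(students["s1"])
```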
3.4 K-NN Network Evolution
Finally, after the teacher has graded some s^- students, which become s^+ students, the modified K-NN algorithm can start. The learning process is composed of the following equations:

K_{new} = K_{old} + α (KMAX - K_{old})    (0 ≤ α ≤ 1)
K_{new} = K_{old} + α (K_{old} - KMIN)    (α < 0)
α = \frac{1}{IMAX} · \frac{\sum_{i=1}^{k} (1/d_i)(K^+_i - K_{old})}{\sum_{i=1}^{k} (1/d_i)} · \frac{Dev_{old}}{IMAX}    (9)
where:

1. d_i is the Euclidean distance between the s^- student under update and the i-th student in the Core Group (s^+_i);

2. the K_{new} value is given as a convex function, to keep K in [1, 10];

3. the acronym K-NN features a K, possibly misleading here, so we use k for the number of nearest neighbors to be used in the learning algorithm;

4. the Dev_{old}/IMAX factor has the same meaning as the Dev_{grading}/IMAX factor in Eq. 7: the higher the Dev of the student under update, the less credible her current K, and the larger the change.
J_{new} = J_{old} + \frac{K_{new} - K_{old}}{IMAX} · J_{old}    (β = 0 ∧ J^+_i = J_{old}, i = 1 ... k)
J_{new} = J_{old} + β (JMAX - J_{old})    (0 ≤ β ≤ 1)
J_{new} = J_{old} + β J_{old}    (β < 0)

with:

β = \frac{K_{new} - K_{old}}{IMAX} · \frac{\sum_{i=1}^{k} (1/d_i) |J^+_i - J_{old}|}{\sum_{i=1}^{k} (1/d_i)} · \frac{Dev_{old}}{IMAX}    (10)
where:

1. As mentioned earlier, we assume J depends on K: this is expressed through the difference between the K_{new} value, obtained by Equation 9, and the K_{old} value;

2. d_i is the Euclidean distance between the s^- student under update and the i-th student in the Core Group (s^+_i);

3. the J_{new} value is given as a convex function, to keep J in its normal range [0, 1];

4. k is as explained for the previous equation;
5. About the coefficient β, some remarks are due for the cases when β = 0. On the one hand, when the J^+ of the k nearest neighbors is equal to the J_{old} value of the student under update, J_{new} is computed by the difference between K_{new} and K_{old} only. The rationale is that when the s^- student changes her K^- value, her assessment skill should change as well (by the assumption of dependence of J on K). On the other hand, when the K^- value for the student under update is not changed, the assessment skill stays unchanged as well.
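Putting Eqs. 9-10 together, a single K-NN update of one s^- student could look as follows; the dictionary-based representation and the guard against zero distances are our illustrative choices:

```python
import math

IMAX, KMAX, KMIN, JMAX = 9.0, 10.0, 1.0, 1.0

def knn_update(s, core, k=3):
    """One modified K-NN step (Eqs. 9-10) on student s, given the CORE group."""
    dist = lambda c: max(math.dist((s["k"], s["j"]), (c["k"], c["j"])), 1e-9)
    near = sorted(core, key=dist)[:k]          # k nearest CORE points in (K, J)
    w = [1.0 / dist(c) for c in near]          # inverse-distance weights 1/d_i
    k_old, j_old = s["k"], s["j"]
    alpha = (sum(wi * (c["k"] - k_old) for wi, c in zip(w, near)) / sum(w)) \
            / IMAX * (s["dev"] / IMAX)                                   # Eq. 9
    s["k"] = k_old + (alpha * (KMAX - k_old) if alpha >= 0
                      else alpha * (k_old - KMIN))
    beta = (s["k"] - k_old) / IMAX * (s["dev"] / IMAX) \
           * (sum(wi * abs(c["j"] - j_old) for wi, c in zip(w, near)) / sum(w))
    if beta == 0:                              # degenerate case of Eq. 10
        s["j"] = j_old + (s["k"] - k_old) / IMAX * j_old
    elif beta > 0:
        s["j"] = j_old + beta * (JMAX - j_old)
    else:
        s["j"] = j_old + beta * j_old

core = [{"k": 8.0, "j": 0.7}, {"k": 6.0, "j": 0.5}]
s = {"k": 3.0, "j": 0.3, "dev": 2.5}
knn_update(s, core, k=2)
print(s)  # K and J move towards the nearby CORE students
```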
4 EXPERIMENTAL EVALUATION
In this Section we show an experimental evaluation of the algorithms for the network dynamics. The goal of this evaluation is to check the validity of the proposed algorithms, i.e., to show that, after the teacher directly grades a few students, the network modifies the remaining models so that they all converge towards the grades that the teacher would have given, obviously within a certain gap. We built a software system in which to run our trials. In this way, a teacher does not need to correct all the assignments but only a part of them, consuming less time.
The evaluation of such a system presents various problems related to the sample of users, as the proposed algorithms have been designed to address communities of students formed by large numbers, as in MOOCs, where there may be courses with hundreds or even thousands of students. So, for a first experimentation, we created an environment that generates sets of students from statistical distributions that are well known and realistic for the sector. For the grades assigned by the teacher to the students we referred to a Gaussian distribution, generated with the statistical environment R, while for the simulation of the initial models of the students we referred to a uniform distribution of the grades assigned among peers.
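A sketch of this sample generation, using numpy in place of R's Gaussian generator and the C rand() call mentioned below; the seed is arbitrary and the parameter values are illustrative (the mean and deviation mirror the teacher distribution of Tab. 2):

```python
import numpy as np

rng = np.random.default_rng(42)
n_students, n_peers = 1000, 3

# "True" teacher grades: Gaussian, clipped to the [1, 10] grade scale.
teacher = np.clip(rng.normal(loc=5.51, scale=2.8, size=n_students), 1, 10)

# Peer grades: uniform over the grade scale, n_peers grades per student.
peer_grades = rng.uniform(1, 10, size=(n_students, n_peers))

# Initial SMs: K from Eq. 2 (mean of received grades), Dev from Eq. 1.
k0 = peer_grades.mean(axis=1)
dev0 = np.sqrt(((k0[:, None] - peer_grades) ** 2).mean(axis=1))
print(round(k0.mean(), 2), round(dev0.mean(), 2))
```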
Figure 2: The distribution of the teacher grades for n=1000 students.
Figure 3: The distribution of the initial n=1000 SMs in the (K, J) space, where each student graded 3 peers.
Figure 4: The distribution of the initial Dev among peers.
Here we report our main trial, performed with a sample of n = 1000 students. In Fig. 3 the sample distribution is shown in the (K, J) space, while in Fig. 2 the teacher grading distribution shows the Gaussian shape of the sample. The experimental plan consists of several runs of the learning algorithms until a final condition is met. The final condition is that the difference between two consecutive variations of the network is below a small pre-set quantity. So, the experimental plan is composed of the following steps:

1. A sample of n = 1000 students is generated with a uniform distribution in peer assessments. The C rand() function was used;

2. The teacher selects a small group of students (in our case 3) to grade from the ranked list (Table 1);
Table 1: The ranked list of the generated sample: on top the highest Dev values.

St-ID  K    J      Dev   St
997    4    0.478  4.24  NO_CORE
998    7    0.21   4.24  NO_CORE
999    7    0.38   4.24  NO_CORE
...    ...  ...    ...   ...
723    6.7  0.42   2.86  NO_CORE
724    4.7  0.26   2.86  NO_CORE
725    4.3  0.38   2.86  NO_CORE
Table 2: Comparison between the teacher's grade distribution and the initial peer-grade distribution.

         µ     σ
Teacher  5.51  2.8
Students 6.43  1.66
Table 3: Comparison between the teacher's grade distribution and the inferred grade distribution after the K-NN runs.

         µ     σ
Teacher  5.51  2.8
Students 6.02  1.46
3. All the SMs are updated according to the algorithms shown in Sect. 3;

4. The K-NN algorithm is launched;

5. The new general statistical parameters are computed.

Steps 2-4 are launched several times, until the final condition is met.
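The outer loop of this plan can be sketched as follows; grade_fn (the simulated teacher) and step_fn (steps 3-4, e.g., the sketches of Sect. 3) are placeholder hooks, and epsilon is an illustrative choice for the pre-set quantity of the final condition:

```python
import numpy as np

def run_experiment(students, grade_fn, step_fn, per_round=3, epsilon=1e-3):
    """Repeat steps 2-5 until the variation between consecutive states is small."""
    prev = np.array([s["k"] for s in students])
    while True:
        # Step 2: the teacher grades the top-Dev ungraded students (Table 1 order).
        ranked = sorted((s for s in students if not s["core"]),
                        key=lambda s: s["dev"], reverse=True)[:per_round]
        if not ranked:                     # everyone graded: nothing left to infer
            return students
        for s in ranked:
            s["k"], s["core"] = grade_fn(s), True
        step_fn(students)                  # Steps 3-4: SM propagation + K-NN run
        cur = np.array([s["k"] for s in students])
        if np.abs(cur - prev).mean() < epsilon:   # Step 5 / final condition
            return students
        prev = cur
```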
After 4 K-NN runs and 8 teacher grades, we obtained the results shown in Tab. 3. The teacher gave a 5.51 mean grade, with σ = 2.8, while the peers gave a more generous 6.43 with σ = 1.66 (Tab. 2). The initialization of the system started from a mean µ = 6.43, which then developed to 6.02 after the K-NN steps. One key point, in our opinion, is the standard deviation of the assessments, which diminishes with the K-NN steps. This seems encouraging, as it suggests that the framework can improve on pure peer evaluation, and also produce more stable assessment distributions.
5 CONCLUSIONS AND FUTURE WORK
In this article we presented a peer assessment system based on a modified version of the K-NN algorithm. We have included the random generation of SM distributions and some changes to the formulas at the base of the student model's evolution, i.e., learning. The experimental results are encouraging: the system could help teachers manage big numbers of students. As future developments we plan to expand the possibility of simulating students with other statistical distributions and then calibrating the learning mechanism. Another perspective regarding future developments concerns the possibility of making the student community evolve autonomously without the teacher's intervention, based only on social network analysis.
REFERENCES
Anson, R. and Goodman, J. A. (2014). A peer assessment system to improve student team experiences. Journal of Education for Business, 89(1):27–34.

Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg.

Cross, K. P. (1981). Adults as Learners. Jossey-Bass, San Francisco.

De Marsico, M., Sciarrone, F., Sterbini, A., and Temperini, M. (2017a). The impact of self- and peer-grading on student learning. EURASIA Journal of Mathematics, Science and Technology Education, 13(4):1085–1106.

De Marsico, M., Sterbini, A., Sciarrone, F., and Temperini, M. (2017b). Modeling a peer assessment framework by means of a lazy learning approach. In Huang, T.-C., Lau, R., Huang, Y.-M., Spaniol, M., and Yuen, C.-H., editors, Emerging Technologies for Education, pages 336–345, Cham. Springer International Publishing.

Falchikov, N. and Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3):287–322.

Kane, L. S. and Lawler, E. E. (1978). Methods of peer assessment. Psychological Bulletin, 85:555–586.

Limongelli, C., Lombardi, M., Marani, A., and Sciarrone, F. (2013). A teacher model to speed up the process of building courses. In Human-Computer Interaction. Applications and Services - 15th International Conference, HCI International 2013, Las Vegas, NV, USA, July 21-26, 2013, Proceedings, Part II, pages 434–443.

Limongelli, C., Sciarrone, F., and Temperini, M. (2015). A social network-based teacher model to support course construction. Computers in Human Behavior. Article in Press.

Limongelli, C., Sciarrone, F., and Vaste, G. (2008). LS-Plan: An effective combination of dynamic courseware generation and learning styles in web-based education. In Proc. AH'08: Adaptive Hypermedia and Adaptive Web-based Systems, volume 5149 LNCS, pages 133–142.

Mitchell, T. M. (1997). Machine Learning. McGraw-Hill, New York, NY, USA, 1 edition.

Piaget, J. (1971). Science of Education and the Psychology of the Child. Longman, London.
Sadler, P. and Good, E. (2006). The impact of self- and peer-grading on student learning. Educational Assessment, 11(1):1–31.

Sterbini, A. and Temperini, M. (2012). Dealing with open-answer questions in a peer-assessment environment. In Proc. ICWL 2012, LNCS, vol. 7558, pages 240–248.

Vygotsky, L. S. (1962). Thought and Language. MIT Press, Cambridge, MA.