GENERATION OF USEFUL SEMANTIC FEEDBACK FOR
STUDENTS AND TEACHERS
M. Sánchez-Vera, F. Frutos-Morales, M. D. Castellanos-Nieves
M. P. Prendes-Espinosa, J. T. Fernández-Breis
Universidad de Murcia, Campus de Espinardo CP30100, Spain
C. Cruz-Corona
Universidad de Granada, CP 18015, Spain
Keywords: Ontology, Feedback, eLearning.
Abstract: Feedback is an important component of assessment in learning environments, because it allows students to identify their learning flaws, and feedback information is also useful for teachers when designing learning content adapted to the needs of the students. Therefore, the availability of feedback constitutes a new learning opportunity. In this paper we describe an approach based on Semantic Web technologies for generating useful semantic feedback for both teachers and students.
1 INTRODUCTION
The evaluation of students' knowledge and skills is a basic activity in both conventional education and e-learning. To date, different knowledge representation techniques have been used in the computer-assisted assessment of open questions, such as semantic networks or lexical conceptual structures (see for instance (Whittingdon and Hunt, 1999)). These approaches have made use of complementary techniques, including statistical methods, natural-language processing, information extraction, clustering, and hybrid approaches. In recent years, Topic Maps (Maicher and Park, 2005) have been widely used for conceptualizing domains in educational settings. They represent information using topics, relationships, and occurrences, and are thus similar in many aspects to semantic networks and to both concept and mind maps. However, their knowledge is not formalized unless the corresponding topic map ontology is defined. Non-semantic approaches can also be found in the literature; for instance, fuzzy sets have been used for evaluating performance in eLearning settings (see for instance (Wang and Chen, 2008)).
On the other hand, the Semantic Web (Berners-Lee et al., 2001) proposes that web contents be defined and linked not only for visualization but also for use by applications. Moreover, Semantic Web technologies have been applied to eLearning in recent years from different perspectives (see, for instance, (Devedzic, 2006; Fensel et al., 2003; Stojanovic et al., 2001)). In this context, our research group developed the OeLE platform, which applies such technologies to support teachers in the assessment of exams based on open questions (Castellanos et al., 2008). This assessment approach demonstrated its usefulness for supporting exam marking in real courses. However, it did not allow students and teachers to identify the main flaws of the students from the perspective of the course knowledge.
Feedback is indeed an important part of assessment processes, since it allows both teachers and students to take actions to overcome the learning flaws revealed by the assessment tests. Furthermore, its availability constitutes a new learning opportunity, thus enhancing the teaching-learning process. Hence, in this work we address the generation of semantic feedback for both agents of the teaching-learning process, and the OeLE platform is extended to provide such feedback. As a result, students will receive not only the mark for the exam, but also their learning flaws. On the other hand, teachers will know the strengths and
weaknesses of their students through the semantic analysis of the results of the exams.
2 ASSESSMENT IN OeLE
OeLE is an approach based on Semantic Web technologies for supporting teachers in the assessment of exams. The whole picture of assessment in OeLE can be seen in Figure 1, and it includes the following basic assessment entities:
Course ontology: It models the knowledge of
the course, and it must be written in OWL.
Annotated exam: An exam is comprised of a set
of open and closed questions. Each open
question has a set of semantic annotations
associated by the teacher, which constitutes the
expected answer to that question.
Annotated student’s response to the exam:
Semantic annotations are extracted from the
response to each open question. This is a
semiautomatic process that follows the
algorithm presented in (Valencia et al., 2004).
Each semantic annotation consists of associating one or more elements of the course ontology with the question or with part of the student's answer. Once the annotations have been obtained, OeLE automatically computes the mark for each question using the functions presented in (Castellanos et al., 2008).
Figure 1: Assessment in OeLE.
As shown in Figure 1, feedback is approached as the information received by both teachers and students after an exam is marked, and it is obtained by analyzing the performance of the students in that exam. The generation of feedback is strongly related to the marking approach, since both work on the same sets of annotations. As a result of this process, both students and teachers receive information about the knowledge the students have not acquired, and they can then take actions to overcome such learning flaws.
3 FEEDBACK IN OeLE
In this section we will present how feedback is
represented in OeLE, how it is generated and,
finally, we will describe the particular feedback
information generated for teachers and students.
3.1 Representing Feedback
The relevant concepts managed by OeLE to
represent feedback are presented in this subsection.
Definition 1. Open Question
open_question = <desc, expected_answer, {open_question_annot_i}, value>

where desc is the name of the question; expected_answer contains the correct answer to the question in natural language; open_question_annot_i are the semantic annotations defined for that open question; and, finally, value is the number of units given to the student in case of success.
When an open question is created by a teacher, its expected response must be annotated with respect to the course ontology. For this purpose, each open question has an associated set of annotations (concepts, relations, attributes and values).
Definition 2. Open Question Annotation
open_question_annot = <entity_annot, quantitative_value>

where entity_annot represents the annotation for the knowledge entity in the course ontology, and quantitative_value is the numerical score associated with the annotation, which stands for the importance of the knowledge entity in the context of the individual question.

There are three types of entity_annot, for concepts, relations and attributes, each of which associates the particular knowledge item of the course ontology with the question.
Definition 3. Open Question Answer
open_question_answer = <text_answer, {answer_annotation_i}>

where text_answer is the answer of the student in natural language, and answer_annotation_i are the semantic annotations obtained from the textual answer, which are defined next.
Definition 4. Answer Annotation
answer_annotation = <entity_annot, ling_exp>

where entity_annot is defined as for open_question_annot, and ling_exp represents the text of the answer associated with the knowledge entity.

The generation of feedback requires some additional elements, which are defined next.
Definition 5. Feedback Annotation
feedback_annotation = X, where X ∈ {correct, wrong}

A feedback annotation takes the value wrong when the answer annotation is not similar enough to any annotation of the same open question, which is determined by the similarity threshold used. Otherwise, the value is correct.
Once we know which answer annotations are correct and which are wrong, the positive and negative feedback structures can be defined.
Definition 6. Positive Feedback for an Answer
positive_feedback(op) = {(answer_annotation_i, feedback_annotation_i)} such that feedback_annotation_i = correct.
Definition 7. Negative Feedback for an Answer
negative_feedback(op) = {(answer_annotation_i, feedback_annotation_i)} such that feedback_annotation_i = wrong.
The combination of both definitions then provides the definition of the feedback given to a student for a particular answer to an open question.
Definition 8. Student Feedback for an Answer
student_feedback(op) = <text_answer, positive_feedback(op), negative_feedback(op)>
Finally, we can define the feedback generated for a
teacher for a particular open question:
Definition 9. Teacher Feedback for an Open
Question
teacher_feedback(op) = {student_feedback_s(op)}, for every student s who answered the open question, that is, the aggregation of the individual feedback generated for that question.
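To make these structures concrete, the following is a minimal sketch of how they could be represented in code. It is an illustration under our own naming assumptions (the class and field names are ours), not the actual OeLE implementation.

```python
# A sketch of the feedback structures from Definitions 1-8; names are assumptions.
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class EntityAnnot:
    kind: Literal["concept", "relation", "attribute"]  # ontological category
    name: str                                          # entity in the course ontology

@dataclass
class OpenQuestionAnnot:          # Definition 2
    entity_annot: EntityAnnot
    quantitative_value: float     # importance of the entity within the question

@dataclass
class AnswerAnnotation:           # Definition 4
    entity_annot: EntityAnnot
    ling_exp: str                 # text fragment of the student's answer

@dataclass
class StudentFeedback:            # Definitions 6-8
    text_answer: str
    positive: List[AnswerAnnotation] = field(default_factory=list)  # marked correct
    negative: List[AnswerAnnotation] = field(default_factory=list)  # marked wrong
```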
3.2 Obtaining Feedback
The algorithm for feedback generation works on a question-by-question basis, and feedback items are generated in parallel with the calculation of the marking score. Next, we describe how feedback annotations are generated for a particular answer to an open question.
For each semantic annotation of the student's answer, the following process is executed. First, the semantic similarity between that annotation and all the annotations of the expected answer belonging to the same ontological category is computed. The result of this process is a table whose rows are the annotations of the student's answer and whose columns are the annotations of the expected answer; each cell contains the corresponding semantic similarity value.
For each annotation of the expected answer, the most similar annotation of the student's answer is selected. If the similarity is higher than the threshold, the annotation is marked as correct and included in the positive feedback group; otherwise, it is marked as wrong and included in the negative group. It should be noted that the algorithm ensures that each item of the student's answer is matched to at most one item of the expected answer.
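As an illustration, the following sketch implements this matching step under our own assumptions: similarity stands for the semantic similarity function used by OeLE (presented in (Castellanos et al., 2008) and not reproduced here), the structures are those sketched in Section 3.1, and the default threshold value is hypothetical.

```python
# A sketch of the feedback-matching step of Section 3.2; the threshold default
# is a hypothetical value, since OeLE makes it configurable.

def match_annotations(student_annots, expected_annots, similarity, threshold=0.5):
    """For each expected annotation, pick the most similar unmatched student
    annotation of the same ontological category and classify the result."""
    positive, negative = [], []
    matched = set()  # a student annotation may be matched to at most one item
    for expected in expected_annots:
        candidates = [
            (similarity(s.entity_annot, expected.entity_annot), i)
            for i, s in enumerate(student_annots)
            if i not in matched
            and s.entity_annot.kind == expected.entity_annot.kind
        ]
        best = max(candidates, default=None)
        if best is not None and best[0] > threshold:
            positive.append(student_annots[best[1]])  # feedback_annotation = correct
            matched.add(best[1])
        else:
            negative.append(expected)  # expected knowledge item not acquired
    return positive, negative
```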
3.3 Customised Feedback
Feedback has been incorporated into the OeLE platform for both teachers and students. The OeLE platform offers a desktop application for teachers and web-based access for students, so the corresponding software artifacts had to be modified accordingly. This section therefore has two parts, one per type of agent involved in the teaching-learning process: teacher and student.
3.3.1 Providing Feedback to Teachers
The OeLE platform allows the same exam to be marked several times by changing the assessment parameters, and teachers can receive the feedback of any of these marking processes, since each exam has an associated marking configuration. Hence, once the teacher selects the desired exam, the analysis dialog shown in Figure 2 is displayed.
The upper part of the dialog contains general information about the exam, showing statistics such as the mean, the standard deviation, the highest and lowest scores, and the description of the marking criterion used (“Calificación estricta”, that is, strict marking). This description is provided by the teacher when the criterion is created. The lower part of the dialog provides the semantic interpretation of the exams, using the course ontology to perform the analysis. This analysis calculates, for each ontological entity associated with the questions, how many students have answered it correctly and how many have answered it wrongly. To
this end, such entities are classified into two sets: a) entities acquired by the students (“aspectos mejor adquiridos”); and b) entities not acquired by the students (“aspectos peor adquiridos”). Both sets are shown in the lower part of the dialog. Hence, the teacher can see which concepts, relations and attributes have been acquired better or worse by the students, although only concepts are shown in the figure. In the example shown there, the concept interactivity (“interactividad”) has been correctly answered by all the students, whereas simple design (“diseño simple”) has been wrongly answered by 53% of the students.
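The aggregation behind these two sets is essentially a per-entity count of correct and wrong answers over all students. A minimal sketch, assuming the StudentFeedback structure sketched in Section 3.1 and an assumed 50% boundary between the two sets, could look as follows.

```python
# A sketch of the per-entity aggregation shown in the teacher dialog.
# The 50% boundary between "acquired" and "not acquired" is our own assumption.
from collections import Counter

def entity_statistics(all_feedback):
    """Count, for each ontological entity, the fraction of students who
    answered it correctly, and split the entities into two sets."""
    correct, wrong = Counter(), Counter()
    for fb in all_feedback:  # one StudentFeedback per student
        for ann in fb.positive:
            correct[ann.entity_annot.name] += 1
        for ann in fb.negative:
            wrong[ann.entity_annot.name] += 1
    acquired, not_acquired = {}, {}
    for entity in set(correct) | set(wrong):
        ratio = correct[entity] / (correct[entity] + wrong[entity])
        (acquired if ratio >= 0.5 else not_acquired)[entity] = ratio
    return acquired, not_acquired
```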
Figure 2: Providing feedback to teachers in OeLE.
Graphical feedback is also generated for the teacher. To obtain it, the teacher selects the set of entities to be analyzed graphically, and the corresponding graph is generated. The OeLE platform generates bar and pie charts for teachers. Bar charts show the selected course knowledge items ordered by decreasing percentage, whereas pie charts represent and analyze the course knowledge items in relative terms.
3.3.2 Providing Feedback to Students
It has already been mentioned that teachers can launch several marking processes for the same exam by changing the marking criteria. However, the students can only receive the mark and the feedback of one of them, namely the one made public by the teacher. Consequently, the students receive feedback for this public assessment, and this feedback is generated for each question of the exam. The feedback for open questions differs from that for closed ones: the feedback for closed questions does not provide any semantic information, as the platform just shows the student's answer and the correct one.
An example of open question feedback is shown in Figure 3, which presents part of the feedback generated for the first question of an exam. The student can see the description of the question, the score obtained for it (0.29), the expected answer in natural language, and the semantic analysis of his/her answer. The result of this analysis is comprised of two lists:
- Knowledge not acquired (“aspectos a mejorar”): This list contains the knowledge items that were expected in the answer to this question but that the student did not provide. In this example, the student did not mention the concepts “bases of design”, “phases of design” and “recommendations”, the relations “bases of design are the bases of pedagogical design” and “bases of design are the bases of technical design”, and, finally, the attribute “main aspects of the bases of design”.
- Knowledge contained in the answer (“Items respondidos por el alumno”): The marking process obtains a set of semantic annotations from the student's answer, and the feedback is generated by showing the correctness of each ontological entity extracted from it. The figure only shows the concept Tools (“herramientas”), which was correctly answered by the student. Wrong items are marked with a red cross.
Figure 3: The feedback generated for a student.
Moreover, the ontological elements have associated links that allow the student to see their ontological definition. For instance, clicking on a concept opens a web page showing its name, attributes and relations. In summary, the feedback provided to each student can be seen as a personalized recommendation of topics that should be reinforced.
4 EXPERIMENTAL VALIDATION
The course “Design and Production of Educational Materials” is one of the e-learning courses of the Education Degree at the University of Murcia. The course took place in the second semester of 2008/2009 and had 25 students. All course work is carried out in our virtual campus SUMA (http://suma.um.es/). The working processes of the students are evaluated through an e-portfolio and other activities spread over the 9 themes of the program, as well as through the participation of the students in several communication situations (videoconferences, forums and collaborative work). The final evaluation is carried out with two types of exams: a multiple choice test and an open question test. For the latter, OeLE was used, and it served for the validation of the approach.
Now, we describe the process followed in this
validation experiment:
1) Development of the course ontology. The OWL ontology was developed using Protégé and imported into the OeLE platform. It contains 111 classes, 71 object properties and 51 datatype properties, and it also includes disjointness and cardinality constraints. Its consistency was checked using Pellet, and the ontology has ALCHIN(D) DL expressivity (a minimal consistency-check sketch is given after this list).
2) Preparation of reinforcement contents: A series of HTML learning objects were designed and associated with the concepts of the course ontology.
3) Design of the first exam: An exam containing 5
open questions was created using OeLE, and the
expected answers were annotated.
4) Execution of the exam: The students had to answer the test using OeLE within a time limit. During the exam, they could review the contents of the course in the virtual environment and search the Internet for answers. This exam was taken by 21 students.
5) Assessment of the exams: The exams were
marked by a teacher and by OeLE.
6) Feedback: The students and the teacher received the marks and the feedback generated by OeLE. The students then reviewed the reinforcement learning objects associated with the knowledge items suggested by OeLE.
7) Repetition of steps 3, 4 and 5 for the second
exam. This exam was taken by 20 students.
8) Evaluation of the feedback: This was done by the students, who were asked to answer a questionnaire about the effectiveness and usefulness of the learning objects and the feedback received.
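Step 1 can also be reproduced outside Protégé; the following is a minimal consistency-check sketch, assuming the owlready2 Python library (not used in the experiment, where Protégé and Pellet were used directly) and a hypothetical local path for the ontology file.

```python
# A sketch of checking the course ontology with Pellet via owlready2.
# The file path is hypothetical; the expected counts are those reported above.
from owlready2 import get_ontology, sync_reasoner_pellet

onto = get_ontology("file:///path/to/course_ontology.owl").load()
with onto:
    # Raises OwlReadyInconsistentOntologyError if the ontology is inconsistent.
    sync_reasoner_pellet()

print(len(list(onto.classes())))            # expected: 111 classes
print(len(list(onto.object_properties())))  # expected: 71 object properties
print(len(list(onto.data_properties())))    # expected: 51 datatype properties
```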
The whole experiment and its results can be found at klt.inf.um.es/~oele/feedback. This includes the ontology, the questions of the exams, the annotations of their expected answers, the reinforcement contents, samples of the annotations extracted from the students' answers, samples of the semantic feedback generated by OeLE for the teacher and for a particular student, the marks of the students in both exams, and the questionnaire filled in by the students. Next, some evaluation of the feedback results is presented.
First, we compared the results obtained by the students in both exams. If the feedback generated by the system was effective, the students should have obtained a better mark in the second exam. The maximum possible score in an exam is 10. The average mark of the first exam was 6.18 (all 21 students), or 6.12 when restricted to the 20 students who also took the second exam, whose average mark was 6.56. 12 students obtained a better mark, 4 obtained a worse mark and 4 obtained a similar mark; for this classification, we considered that a student obtained a similar mark if the difference between the two marks was not greater than 0.25. Consequently, it seems that the feedback generated was useful for the students. However, this is a single, small experiment, so strong conclusions cannot be drawn from such results.
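For reference, the better/worse/similar classification above is a simple threshold comparison; a sketch:

```python
# A sketch of the mark-comparison rule used above: marks within 0.25 of each
# other are considered similar.
def classify(first_mark: float, second_mark: float, threshold: float = 0.25) -> str:
    diff = second_mark - first_mark
    if abs(diff) <= threshold:
        return "similar"
    return "better" if diff > 0 else "worse"
```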
Therefore, we asked the students to answer a questionnaire. It was designed from a pedagogical perspective and included questions related to different issues, such as usability, accessibility, the quality of the learning objects and the usefulness of the feedback.
Next, we discuss the results of the three questions related to the feedback. A Likert scale was used for answering the questions: the students had to assign a value between 1 (maximum disagreement) and 4 (maximum agreement). In order to summarize the results, two groups were created: agreement (3-4) and disagreement (1-2). The detailed results can be checked at the aforementioned website. These are the three questions:
Question 1: Knowing the errors made in my
exam is a waste of time.
o Agreement: 16.7%
o Disagreement: 83.3%
Question 2: Showing the feedback information about the errors in my exam is positive.
o Agreement: 88.9%
o Disagreement: 5.5%
o No answer: 5.6%
Question 3: I think I would have obtained the same mark in the second exam without the feedback generated by the system.
o Agreement: 22.2%
o Disagreement: 66.67%
o No answer: 11.1%
Consequently, it can be said that the students found the generated semantic feedback useful and effective, and that such feedback can help students improve their academic performance.
5 CONCLUSIONS
Assessment is a fundamental part of the teaching-learning process, and feedback is an important component of assessment, since it is the process through which students and teachers get precise information about the learning flaws of the students and can then take effective actions. However, most current eLearning systems do not offer possibilities for providing feedback and, in most cases, they only provide a numeric score for the closed questions.
In this work, mechanisms for providing feedback based on Semantic Web technologies have been proposed and implemented in an existing software platform, with the aim of facilitating continuous learning processes and reducing the workload of teachers in these tasks. Feedback is generated by analyzing the semantic annotations associated with the expected answers of the questions and with the answers of the students. On the teacher side, information about the weaknesses of the students is provided, so teachers can design new materials, schedule extra lessons, or prepare extra exercises for students to overcome their learning flaws. On the student side, the list of knowledge items answered correctly and wrongly is provided, so that students know what they have to reinforce.
Here, semantic feedback is provided for open questions. We plan to redefine closed questions so that they also have associated semantic annotations, which will be used to generate feedback for the student. As further work, we will also provide links to the learning objects associated with the ontological elements.
ACKNOWLEDGEMENTS
This work has been possible thanks to the Seneca
Foundation, through Project 08756/PI/08, and the
Regional Government of Murcia, through project
TIC-INF 07/01-0001. María del Mar Sánchez Vera
is supported by the Spanish Ministry for Science and
Innovation through the FPU Program.
REFERENCES
Berners-Lee, T., Hendler, J., Lassila, O. (2001). The Semantic Web. Scientific American 284(5), 34-43.
Castellanos-Nieves, D., Fernández-Breis, J. T., Valencia-García, R., Cruz, C., Prendes-Espinosa, M. P., Martínez-Béjar, R. (2008). Using Semantic Web Technologies for the Assessment of Open Questions. Lecture Notes in Artificial Intelligence 5351, 42-53.
Devedzic, V., (2006). Semantic Web and Education.
Springer.
Fensel, D., Staab, S., Studer, R., van Harmelen, F., Davies, J. (2003). A Future Perspective: Exploiting Peer-to-Peer and the Semantic Web for Knowledge Management. In Towards the Semantic Web. John Wiley and Sons, Ltd, 245-264.
Maicher, L., Park, J. (Eds.), (2005). Charting the Topic
Maps Research and Applications Landscape. Springer.
Stojanovic, L., Staab, S., Studer, R. (2001). eLearning Based on the Semantic Web. In Proceedings of WebNet 2001.
Valencia-García, R., Ruiz-Sánchez, J. M., Vivancos-
Vicente, P. J. , Fernández-Breis, J. T., Martínez-Béjar,
R., (2004). An incremental approach for discovering
medical knowledge from texts. Expert Systems with
Applications 26 (3), 291-299.
Wang, H., Chen, S., (2008). Evaluating students'
answerscripts using vague values. Applied Intelligence
28, 183-193.
Whittingdon, D., Hunt, H., (1999). Approaches to the
computerised assessment of free-text responses. In
Proceedings of 3rd International Computer Assisted
Assessment Conference.