the solutions by email to their teacher. Unfortunately,
with this approach the possibilities that an online en-
vironment should offer would be wasted; for example,
if a student could receive immediate feedback after
submitting a test on-line, he/she could direct his/her
own learning in a better and more efficient way. The
motivation for this work was to develop an assessment system that would make it possible to define and correct tests via the web. Although our work was conceived in the context of the Facultade Virtual, the developed assessment system can be easily integrated into any other e-learning system or even operate on its own.
The structure of the rest of the paper is as follows.
In Section 2, we compare different systems available
for evaluating students, assessing the convenience of
developing a new one. In Section 3, and as a result
of the previous comparison, we identify the function-
alities that should be provided in the new assessment
system. In Section 4, we describe the system from a
functional and technological point of view. Finally,
in Section 5, we draw some conclusions and outline lines of future work.
2 COMPARISON OF
ASSESSMENT SYSTEMS
Nowadays, there are many assessment systems avail-
able. Therefore, we first needed to analyze the most relevant ones in order to decide whether one of them could be used in our context or whether it was preferable to design and implement a new one. Thus, we analyzed 21 assessment systems that we consider significant. In this section, we briefly present the conclusions of our analysis. We evaluated to what degree they supported the following features:
1. Functionalities offered to the different roles in-
volved in the learning-teaching process. We con-
sider the existence of three types of users: ad-
ministrators, teachers, and students. Each of them needs to access and use the system in a different way. Therefore, the assessment system should take the needs of all of them into account.
2. Features of the graphical user interface. We con-
sider the usability of the interface (i.e., whether it
can be easily used by people not familiar with computers) and whether the interface is available in several languages. Moreover, we take into account whether knowledge of some computer language (e.g., HTML) is needed to manage the system.
3. Features of the tests that can be generated. We
evaluate the available test presentation formats
(web pages, plain text, proprietary formats, etc.)
and whether modifications can be easily per-
formed with the goal of adapting them to other
environments. We also check if it is possible to
structure the tests in sections including different
types of questions, if we can set a maximum num-
ber of attempts and the maximum amount of time
allowed for the test, and whether the exercises and
questions can be generated choosing randomly
among several alternatives. Finally, we also con-
sider if the system supports the inclusion of mul-
timedia materials (images, video, audio, etc.).
4. Features of the questions/exercises allowed. We
consider which types of questions are supported,
especially whether it is possible to include multiple/single choice and free-text questions (e.g., essays), as these are the most common types of questions in Higher Education e-learning environments. We also consider it interesting to check whether it is possible to define clues that can help and guide the students when they find difficulties. Finally, it is also useful to be able to classify the questions into different topics and according to their difficulty.
5. Features of correction. We consider if the sys-
tem has the ability to automatically or semi-
automatically correct some types of tests (e.g.,
tests not including free-text questions). We also
analyze the quality of the information presented
to the student when he/she submits his/her an-
swers/exercises (e.g., the grade obtained, advice on which topics to revise, the correct choices, sample correct answers, etc.). A minimal sketch illustrating such a test model and its automatic correction appears after this list.
6. Support to track the performance of students. This
is a key feature, as otherwise teachers would not
be able to monitor the learning process or to adapt to the students' needs and to any unexpected situations detected.
7. Security. We consider whether measures are taken
to preserve the privacy and integrity of the stored information, and whether there are mechanisms to prevent cheating during online exams.
8. Features concerning the technologies used to im-
plement the system. We consider whether a pro-
prietary or open technology has been used, its
scalability, and whether it is easily extensible to
include new modules/functionalities.
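To make features 3 to 5 more concrete, the following Python sketch models a test made of sections, where each section draws one question at random from several alternatives, and corrects the choice questions automatically while deferring free-text ones to manual review. It is purely illustrative: all class and function names are assumptions made for this example and do not correspond to any of the surveyed systems or to the implementation described later.

```python
# Purely illustrative sketch: all names (ChoiceQuestion, Section, Test,
# correct_test) are assumptions for this example, not any surveyed system.
import random
from dataclasses import dataclass
from typing import List, Optional, Set, Union


@dataclass
class ChoiceQuestion:
    """Single- or multiple-choice question; can be corrected automatically."""
    text: str
    options: List[str]
    correct: Set[int]            # indices of the correct options
    topic: str = "general"
    difficulty: int = 1          # e.g., 1 (easy) to 5 (hard)
    clue: Optional[str] = None   # optional hint shown on request


@dataclass
class FreeTextQuestion:
    """Essay-style question; needs manual or semi-automatic correction."""
    text: str
    sample_answer: str = ""
    topic: str = "general"
    difficulty: int = 1


Question = Union[ChoiceQuestion, FreeTextQuestion]


@dataclass
class Section:
    """A test section: one question is drawn at random from its alternatives."""
    alternatives: List[Question]

    def pick(self) -> Question:
        return random.choice(self.alternatives)


@dataclass
class Test:
    title: str
    sections: List[Section]
    max_attempts: int = 1
    time_limit_minutes: Optional[int] = None

    def generate(self) -> List[Question]:
        """Instantiate one concrete test by sampling every section."""
        return [section.pick() for section in self.sections]


def correct_test(questions: List[Question], answers: List[Set[int]]):
    """Grade the choice questions automatically; defer free-text ones.

    Returns the grade over the auto-gradable part plus per-question feedback.
    """
    feedback, score, gradable = [], 0, 0
    for question, answer in zip(questions, answers):
        if isinstance(question, ChoiceQuestion):
            gradable += 1
            ok = answer == question.correct
            score += int(ok)
            feedback.append((question.topic,
                             "correct" if ok else "revise this topic"))
        else:
            feedback.append((question.topic, "pending manual correction"))
    grade = score / gradable if gradable else None
    return grade, feedback
```

Under these assumptions, a front-end could show the grade over the auto-gradable part and the per-topic feedback immediately after submission, while flagging free-text answers for the teacher; this is the kind of immediate feedback mentioned in the introduction.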
We present a summary of the comparison in Table 1, where we use the following symbols¹:

¹ The complete survey is available (in Galician) at http://webdiis.unizar.es/~raqueltl/Archivos/Ficheros/Memoria.doc.gz.