is fully coordinated. This is not the case for IPC,
whose greatest problem lies in the fair evaluation of
its students.
IPC has on average 50 classes per year with 30
students per class. It is scheduled for five hours per
week, two of which are laboratory lectures. Three classes
are merged for each classroom lecture, which therefore
gathers about 90 students, whereas each laboratory is
supervised by a single professor. In total, circa 40
professors teach IPC, and each one has a different way
of evaluating students. Hence IPC does not rely on a
unified examination and shows a large variance in marks.
This problem does not occur in IPH, where both
examination and content are fully unified. Students
access all IPH content on a single platform, called
e-learning Tidia, which is based on Sakai
(https://sakaiproject.org). In (Zampirolli et al., 2018) we
obtained, for students that failed IP in 2017, a variance
of 2.3% for IPH, which is negligible compared with the
16.3% obtained for IPC. See that reference for details on
the statistical analyses performed on all IPC and IPH
classes from 2009 to 2017.
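As a minimal sketch of this kind of comparison, assuming hypothetical per-class failure rates (the figures below are illustrative only and are not the data analysed in (Zampirolli et al., 2018)), the variance across classes could be computed as follows:

    # Illustrative comparison of variance across classes.
    # These failure rates are made up; they are not the 2017 data.
    from statistics import pvariance

    ipc_fail_rates = [10, 55, 30, 70, 20]   # percent of failing students per class
    iph_fail_rates = [32, 35, 30, 33, 31]   # unified exam and content

    print(f"IPC variance: {pvariance(ipc_fail_rates):.1f}")
    print(f"IPH variance: {pvariance(iph_fail_rates):.1f}")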
These statistics should be taken as an overview,
because IPH differs from IPC in the following setup:
only circa 180 students enrol in the course each
trimester, and on average they are supervised by just
five professors, who count on teaching assistants to
help the students solve 35 lists of exercises.
In IPH, the three exams (the first taken in classrooms
and the others in laboratories) are all produced by our
online generator of parametric questions, a platform
named webMCTest. We strongly believe that IPC will
present a variance similar to that of IPH once the
course becomes coordinated. This will give further
evidence that webMCTest is a valuable means of fair
evaluation.
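To make the idea of a parametric question concrete before we turn to related works, consider the following minimal Python sketch; it is only an illustration under our own simplifying assumptions and does not reproduce the actual webMCTest question format, which is detailed in Subsection 3.2.

    import random

    # Generic illustration of a parametric question: each student receives
    # the same statement with different randomly drawn parameters.
    TEMPLATE = "Write a program that prints the sum of all multiples of {k} below {n}."

    def generate_variant(student_id: int) -> dict:
        rng = random.Random(student_id)      # seed by student id for reproducibility
        k, n = rng.randint(2, 9), rng.choice([100, 500, 1000])
        answer = sum(x for x in range(n) if x % k == 0)   # expected result, used for correction
        return {"statement": TEMPLATE.format(k=k, n=n), "answer": answer}

    # one individualized question per enrolled student
    exams = {sid: generate_variant(sid) for sid in range(1, 91)}
    print(exams[1]["statement"])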
2 RELATED WORK
In (Zampirolli et al., 2018) we presented evaluations
of students on dissertative (essay) questions carried out
with a previous version of webMCTest. This and earlier
versions are simply called MCTest because they are not
completely online. Parametric questions were not
discussed in (Zampirolli et al., 2018), which used
MCTest 4.0, even though they are already supported by
that version. With webMCTest, parametric questions are
prepared as explained in Subsection 3.2; more
specifically, we summarize the method for MCTest in
Subsubsection 3.2.2, which can also be found in
(Zampirolli et al., 2016).
Before we draw comparisons between our method and
others, readers who prefer to first have an overview of
webMCTest may find the short video at
https://youtu.be/SxQlw9ADxe8
quite helpful.
In (Gusev et al., 2016) the authors present a tool
for online multiple choice tests used in student
evaluation. Their tool resorts to Information and
Communication Technology (ICT). In their database,
questions are grouped by content: if three questions on
the same content are answered correctly, the student is
directed to the next content; otherwise the student
receives extra questions on the same subject in order to
reinforce learning. Their questions are written in XML
files with several tags that describe the many variations
of multiple choice questions, such as weight, number of
correct answers and penalty for wrong answers. Their
work is similar to the one presented in (Zampirolli
et al., 2018), which however differs in format (LaTeX
instead of XML) and purpose (hard copy instead of
online).
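For illustration only, the following Python sketch models how such question metadata and the advance rule could be represented; the field names are our own assumptions and do not reproduce the XML schema of (Gusev et al., 2016).

    from dataclasses import dataclass

    # Illustrative question metadata; these field names are assumptions,
    # not the actual XML tags used by (Gusev et al., 2016).
    @dataclass
    class Question:
        content: str            # topic the question belongs to
        weight: float           # contribution to the final score
        correct_answers: int    # number of correct alternatives
        penalty: float          # deduction for a wrong answer

    def should_advance(results: list[bool]) -> bool:
        """Advance to the next content after three correct answers on it."""
        return sum(results) >= 3

    q = Question(content="loops", weight=1.0, correct_answers=1, penalty=0.25)
    print(should_advance([True, True, False]))   # False: stay on this content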
A more specific reference is (Del Fatto et al.,
2016), which presents an ICT applied to the Operating
Systems course of a master's programme. The authors
work with Bash (the shell of the Unix operating system),
and in their work they present exams consisting of 45
Bash exercises delivered through the Moodle LMS. These
exams were taken within the LMS itself, making use of
its available question banks.
In (Kose and Deperlioglu, 2012) the authors
introduce an ICT devoted to solving problems in the C
programming language. This ICT can diagnose how much
knowledge of C the student has, and it can also generate
specific questions with feedback and hints that help
solve each problem. The authors developed a resource
consisting of a drag-and-drop interface with which the
student assembles a piece of code; this interface is
incorporated into an e-learning system. The students
learn from their mistakes by means of constraint-based
warning messages. Their ICT chooses new problems for
each student according to the questions they have
already solved and the time taken to solve them. There
is also an evaluation platform on which students can
take multiple choice tests prepared according to their
learning levels.
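As an illustration of this kind of adaptive selection, the sketch below picks the next problem from topics the student has attempted only slowly; it is our own simplified reading, not the actual algorithm of (Kose and Deperlioglu, 2012).

    # Hedged illustration of adaptive problem selection.
    def next_problem(history: dict[str, float], pool: dict[str, list[str]],
                     time_limit: float = 300.0) -> str:
        """history maps solved problem -> time taken (seconds);
        pool maps topic -> list of problem identifiers."""
        # First pass: reinforce topics the student attempted but solved slowly.
        for problems in pool.values():
            unsolved = [p for p in problems if p not in history]
            slow = [p for p in problems if history.get(p, 0.0) > time_limit]
            if unsolved and slow:
                return unsolved[0]
        # Second pass: otherwise hand out any unsolved problem.
        for problems in pool.values():
            for p in problems:
                if p not in history:
                    return p
        return "no problems left"

    history = {"loops-1": 420.0}                       # solved, but took 7 minutes
    pool = {"loops": ["loops-1", "loops-2"], "pointers": ["ptr-1"]}
    print(next_problem(history, pool))                 # "loops-2": reinforce loops first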
In the research area of Medical Education, (Gierl
et al., 2012) introduces a cognitive model to generate
a database of multiple choice questions. Such models
follow the representation of the knowledge and skills
that are necessary to solve a problem (Pugh et al., 2016).
The study presented in (Gierl et al., 2012) begins with
a specialist in Medicine creating a cognitive model
to evaluate a specific topic. For each question, many
correct alternatives are produced. Finally, the genera-