in Germany. This work also discusses the complexity of creating heterogeneous items. For instance, an item can become difficult to understand by changing a single letter in a word or by altering the word order. For their evaluations the authors selected 40 out of 70 items, of which 10 were classified as hard and 30 as easy.
A study presented in (Nguyen et al., 2017) compared CBA with traditional assessment methods on 74 undergraduate modules and their 72,377 students. The modules belonged to a variety of disciplines (25% in Science & Technology, 22% in Arts & Social Sciences, 14% in Business & Law, 9% in Education & Languages, and 30% in others). The authors found that the time devoted to assessment activities was significantly related to passing rates. Their work also concluded that balancing weekly assessments with other activities through CBA has a positive influence on passing rates.
Therefore, if ICT can ease teachers' tasks by generating and correcting tests automatically, allowing them to conduct more frequent evaluations (Adkins and Linville, 2017; Nguyen et al., 2017; Nguyen et al., 2018) with heterogeneous items (Engelhardt et al., 2017) on paper-and-pencil (as verified in (Hakami et al., 2016)), then students will probably achieve better performance.
In this paper we present the system MakeTests, whose open source code is available on GitHub. It is useful for the generation and correction of printed exams, and its code can be easily adapted to new question types. Section 2 motivates the discussion by comparing some related works with MakeTests. In Section 3 we explain the MakeTests method, later exemplified by two experiments described in Section 4. Finally, conclusions and future work are drawn in Section 5.
2 RELATED WORKS
The Introduction presented studies that reveal the importance of frequent assessment. We now comment on works more closely related to what is proposed in this article, namely ICT that facilitate the process of creating and correcting heterogeneous (or parametric) questions of various styles.
In (Smirnov and Bogun, 2011) the authors present an ICT resource of visual modelling to teach science and mathematics, including the solving of scientific problems. Implemented with the PHP programming language and MySQL databases, the ICT relies on a uniform relational database of teachers, students, educational projects and educational studies. The authors tested the ICT with over 1,000 students of secondary schools in Russia. As an example, groups of 5-6 students had to solve problems involving Newton's Second Law: they filled out tables of values, visualized graphs and tried to solve the problem analytically. However, the authors did not work on actual graphical interfaces but focused on the methods to create activities. Therefore we cannot draw conclusions about the usability of their ICT regarding students' performance in tests.
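For illustration, the table-filling activity reported above can be mimicked with a few lines of Python. This is only a sketch of the kind of exercise described, not part of the authors' tool, and the force and mass values below are hypothetical.

# Sketch of the tabulation activity: computing the
# acceleration a = F/m from Newton's Second Law.
# The value ranges are illustrative assumptions.
masses = [1.0, 2.0, 5.0]      # mass in kg
forces = [10.0, 20.0, 50.0]   # applied force in N

print(f"{'F (N)':>8} {'m (kg)':>8} {'a (m/s^2)':>10}")
for m in masses:
    for F in forces:
        a = F / m             # Newton's Second Law: F = m * a
        print(f"{F:>8.1f} {m:>8.1f} {a:>10.2f}")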
The formal specification language ADLES was introduced in (de Leon et al., 2018; Allen et al., 2019). It is open source and devoted to the formal specification of hands-on exercises in virtual computing, networking, and cybersecurity. With ADLES educators can design, specify, and semi-automatically deploy a virtual machine (VM) for classes, tutorials or competitions, which students access in order to accomplish their tasks.
In (Zampirolli et al., 2019) the authors present the MCTest platform (vision.ufabc.edu.br), developed in Django and MySQL, whose open source code is available on GitHub. Hence the platform can be installed in several institutions, where the system administrator (SA) registers the departments, courses, disciplines and professors. To each course the coordinators attribute Topics, Classes, Questions, Exams, Professors and Students. Any professor can also create Classes, Questions and Exams. All these entities are created through web browser windows. Classes and Questions can also be imported from CSV files: for a Class, the file specifies each student's Id, name and email, while Questions follow another CSV format. The purpose of that paper is to describe the process of creating parametric questions, of either dissertation (open-ended) or multiple-choice type, which rely on some Python code in their scope. In this way, MCTest produces individualized exams, one for each student, all contained in a single PDF file per class. Moreover, a professor who lectures a course can generate a unified exam for all of their classes. The correction is automatic for multiple-choice questions, provided the professor digitizes the answer cards into another PDF to be uploaded to the system. For questions that include program code, the student can submit the answers to Moodle (moodle.org) via VPL (available at vpl.dis.ulpgc.es) for automatic correction. As we will see in the next sections, MakeTests includes a wider variety of question styles than MCTest.
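To give a concrete idea of what a parametric question with embedded Python code might look like, consider the minimal sketch below. The structure (a template with randomized parameters and generated distractors, seeded per student) is our own hypothetical example; it does not reproduce MCTest's or MakeTests' actual question format.

import random

def parametric_question(seed):
    # Hypothetical parametric multiple-choice question:
    # each student (identified by seed) receives different
    # numbers and a different set of choices.
    rng = random.Random(seed)
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    correct = a * b
    # Distractors: plausible wrong answers near the correct one
    # (a set removes accidental duplicates).
    choices = sorted({correct, a + b, correct + a, correct - b})
    stem = f"What is the product {a} x {b}?"
    return stem, choices, choices.index(correct)

stem, choices, answer_idx = parametric_question(seed=12345)
print(stem)
for i, c in enumerate(choices):
    print(f"({chr(97 + i)}) {c}")

Changing the seed yields a distinct but equivalent item, which is the essence of the individualized exams both platforms aim at.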
With the intention of reducing plagiarism, (Manoharan, 2019) reports how significant it is to create personalized multiple-choice questions. That work describes the insufficiency of just shuffling