educational taxonomies in the learning process is to identify the cognitive level of exam questions. For instance, course exams should include questions that effectively assess different levels of learning.
Based on educational and pedagogical theories, researchers have proposed different taxonomies to help educators develop learning resources, assessments, and learning outcomes. Among the proposed taxonomies are Bloom’s taxonomy (Bloom, 1956) and its revised version (Krathwohl, 2002). It is mainly based on six levels of the cognitive learning process: Remember, Understand, Apply, Analyze, Evaluate, and Create. Furthermore, lists of action verbs have been identified to describe the intended learning outcomes of a course. The revised version of Bloom’s taxonomy essentially maps the cognitive dimensions to the knowledge dimensions. Another taxonomy is the so-called SOLO taxonomy (Biggs & Collis, 1982), whose levels (Prestructural, Unistructural, Multistructural, Relational, and Extended Abstract) are not restricted to cognitive aspects but also cover knowledge and skills. Further educational taxonomies used in assessment and evaluation are reviewed in (Fuller et al., 2007; Ala-Mutka, 2005).
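For illustration, the pairing of cognitive levels with action verbs can be pictured as a simple lookup table. The Python sketch below uses a common but non-exhaustive verb selection; the concrete verb lists vary across sources and are not taken from the cited works:

BLOOM_VERBS = {
    "Remember":   ["define", "list", "recall", "state"],
    "Understand": ["explain", "summarize", "classify", "describe"],
    "Apply":      ["solve", "demonstrate", "use", "implement"],
    "Analyze":    ["compare", "differentiate", "examine", "organize"],
    "Evaluate":   ["justify", "critique", "assess", "argue"],
    "Create":     ["design", "construct", "formulate", "compose"],
}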
In general, there are three types of exam generation approaches (Cen et al., 2010). The first type offers a question repository that educators can explore to select the questions for a specific exam. This type is very close to manual creation of the exam; however, the educators can inspect the stored questions in the database through a user interface. The second type generates the exam by selecting questions at random. The third type generates exams by means of AI algorithms that realize predefined rules.
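The difference between the second and the third type can be sketched in a few lines of Python. The question representation and the rule format below are our own assumptions for illustration, not taken from (Cen et al., 2010):

import random

def random_exam(bank, size):
    # Second type: questions are drawn uniformly at random.
    return random.sample(bank, size)

def rule_based_exam(bank, size, rules):
    # Third type: predefined rules restrict the admissible
    # questions before the exam is composed from them.
    admissible = [q for q in bank if all(rule(q) for rule in rules)]
    return random.sample(admissible, size)

bank = [
    {"text": "Define recursion.",    "level": "Remember", "difficulty": 1},
    {"text": "Implement quicksort.", "level": "Apply",    "difficulty": 3},
    {"text": "Compare BFS and DFS.", "level": "Analyze",  "difficulty": 2},
]
exam = rule_based_exam(bank, 2, [lambda q: q["difficulty"] <= 2])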
Normally, identifying simple or difficult questions depends mainly on the educators’ intuition and experience. Furthermore, similar or repeated questions can appear in manually created exams. Another possible drawback is a careless division of the exam’s total mark over the composed questions. Finally, manually preparing exams while aligning questions with learning outcomes imposes a high mental demand. Given these drawbacks, there is a risk of poorly designed assessments, which can lead to an unsatisfactory attainment rate of the intended learning outcomes of the course. To overcome these obstacles, we propose a systematic approach that diminishes such drawbacks. The proposed approach automatically generates course exams, quizzes, exercises, and homework using Bloom’s taxonomy. Furthermore, it divides the total mark of the exam over the selected questions based on predefined criteria, as sketched below.
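As a rough sketch of what such a predefined criterion could look like, marks can be split proportionally to per-question weights. The weighting by difficulty below is an assumption for illustration, not the actual rule of the proposed approach:

def distribute_marks(questions, total_mark, weight_of):
    # Split the exam's total mark over the selected questions
    # in proportion to a predefined per-question weight.
    weights = [weight_of(q) for q in questions]
    scale = total_mark / sum(weights)
    return [round(w * scale, 1) for w in weights]

questions = [{"difficulty": 1}, {"difficulty": 2}, {"difficulty": 3}]
distribute_marks(questions, 60, lambda q: q["difficulty"])
# -> [10.0, 20.0, 30.0]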
This paper is structured as follows. The next section presents a number of existing tools proposed to generate exams automatically. Then, the proposed approach to generating examinations automatically is discussed, together with a list of requirements and the conceptual framework. Next, the implementation and the developed prototype are discussed. Finally, the paper is concluded and future directions are presented.
2 LITERATURE REVIEW
This section reviews related work on automatically generating exams out of a question bank.
Different attempts have been made to incorporate Bloom’s taxonomy into automatic exam generation. For instance, the work presented in (Kale & Kiwelekar, 2013) considers four constraints to generate the exams: proper coverage of the units of the course’s syllabus, coverage of the difficulty levels of the questions, coverage of the cognitive levels of Bloom’s taxonomy, and the distribution of marks across questions. These constraints drive the developed algorithm that generates the final exam paper. Another interesting work on classifying questions according to Bloom’s taxonomy is presented in (Omar et al., 2012). The proposed work is a rule-based approach; however, the generation of exams is not considered in this work.
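A minimal sketch of how such a rule-based classifier might look is given below; it matches a question’s words against the verb table shown earlier, whereas the rules in (Omar et al., 2012) are considerably richer:

def classify_question(question, bloom_verbs):
    # Assign the first Bloom level whose action verbs
    # appear in the question text (BLOOM_VERBS as above).
    words = question.lower().split()
    for level, verbs in bloom_verbs.items():
        if any(verb in words for verb in verbs):
            return level
    return "Unclassified"

classify_question("Compare merge sort and heap sort.", BLOOM_VERBS)
# -> "Analyze"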
Other approaches use Natural Language Processing (NLP) to classify questions and to assign a weight to each question. For instance, the authors in (Jayakodi et al., 2016) show promising results in using NLP techniques to weight questions according to the cognitive levels of Bloom’s taxonomy. Other researchers (Mohandas et al., 2015) propose a fuzzy logic algorithm for selecting questions depending on their difficulty level.
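To make the fuzzy selection idea concrete, difficulty can be modeled with overlapping membership functions. The triangular shapes and the 0-10 score scale below are assumptions for illustration; the concrete membership functions in (Mohandas et al., 2015) may differ:

def triangular(x, a, b, c):
    # Membership rises linearly from a to the peak b, then falls to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def difficulty_memberships(score):
    return {
        "easy":   triangular(score, -1, 0, 5),
        "medium": triangular(score, 2, 5, 8),
        "hard":   triangular(score, 5, 10, 11),
    }

difficulty_memberships(6)
# -> {'easy': 0.0, 'medium': 0.67 (approx.), 'hard': 0.2}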
Different tools have been developed to validate the proposed approaches in the context of automatic exam generation. For instance, (Cen et al., 2010) presented a J2EE-based tool that supports educators in identifying the subject, question types, and difficulty level; accordingly, the prototype generates the exam as an MS Word document. This work does not map questions to the course syllabus or to Bloom’s taxonomy. Other researchers (Gangar et al., 2017) propose a tool which categorizes questions as knowledge-based, memory-