USING BLOOM'S COGNITIVE DOMAIN IN WEB EVALUATION
ENVIRONMENTS
Gustavo H. S. Alexandre, Simone C. dos Santos
C.E.S.A.R., Centro de Estudos e Sistemas Avançados do Recife, Bione Street, Recife, Brazil
UPE, Universidade de Pernambuco, Av. Agamenon Magalhães, Recife, Brazil
Patrícia C. A. R. Tedesco
CIn - Centro de Informática da UFPE, Universidade Federal de Pernambuco, Recife, Brazil
Keywords: Assessment process, Bloom taxonomy, Web-based information system, ICT in education.
Abstract: This article proposes a web-based Information System grounded in Bloom's Taxonomy, which aims to support the assessment and tracking of the learning process. Based on a defined assessment methodology, a prototype of this model, called Smart Education, was implemented with a focus on educational objectives, performance reports and feedback to students and teachers. A short experiment was run in a Software Engineering graduate course, yielding key results regarding its use and application.
1 INTRODUCTION
Information and Communication Technology (ICT)
provokes notable cultural and educational changes
when used as an important resource for research and
academic renewal, benefiting professors, researchers
and students (Levy, 1993). The internet is one of the
main actors in this change: applied in the classroom
context, it is an outstanding support tool for teaching
activities, offering a "virtual extension of the actual
classroom" (Gomes, 2005).
This new educational context gives education
greater flexibility and accessibility to information;
however, it demands the construction of new
pedagogical practices and concepts that respond to
the needs of students and professors who benefit
from the use of ICT. In particular, there is the
challenge of "learning assessment": incorporating the
peculiarities brought by digital learning environments
into the construction of instruments and assessment
strategies appropriate for the new educational
contexts. In this process, it is essential to define
assessment objectives accurately and to choose the
proper methods, making it possible to evaluate with
higher effectiveness (Bloom, 1977).
Educational objectives can be elaborated based on
classification schemes. The "Taxonomy of
Educational Objectives - Cognitive Domain",
elaborated by Bloom and his contributors (Bloom,
1977), is one of the most popular of these schemes.
Although Bloom's Taxonomy is divided into three
domains (Affective, Psychomotor and Cognitive),
the cognitive domain was selected as the center of
this research, considering that the achievement of
cognitive objectives is an essential requirement for
the majority of educational and training programs.
Considering this context, this article proposes a
web-based Information System model, grounded in
the Cognitive Domain of Bloom's Taxonomy, with
the purpose of supporting the assessment and
accompaniment of the learning process. A prototype
of this model, entitled Smart Education, was
implemented, starting from the definition of an
assessment methodology focused on questions based
on educational objectives, accompaniment, and
feedback reports for students and professors. Smart
Education works attached to the virtual learning
environment Moodle (free and open source)
[www.moodle.org], from which all the basic
information about courses, subjects, teachers and
students is extracted. A case study was carried out in
a post-graduate course in Software Engineering,
presenting satisfactory results regarding its
application.
This article is divided into six sections. Section 2
presents some of the concepts used in the definition
of the assessment methodology, which is described in
Section 3. Smart Education, developed from this
assessment methodology, is briefly described in
Section 4, and the experiment carried out with it is
presented in Section 5. Finally, the last section
presents the conclusions and final considerations.
2 ASSESSMENT IN THE
LEARNING PROCESS
The assessment process, as part of the learning
process, must be based on clear and well-defined
propositions. It is first necessary to distinguish
between two terms used throughout this article:
assessment and evaluation. In this article, assessment
focuses on learning, teaching and results: it provides
information to improve teaching and learning. The
information collected is used by teachers to improve
the learning environment and is also shared with
students to help them direct their studies and learn
better. The focus of this information is the student,
not classification.
The term evaluation, in turn, focuses on
comparison and classification. It has a summative
character and is concerned only with what was
learned; its ultimate goal is to produce an overall
grade or score.
In (Earl, 1998), six purposes of assessment are
presented: (1) know the students, identifying the
level of previous knowledge they possess when
starting a course or discipline; (2) verify which
educational objectives have been reached; (3)
continuously improve the teaching and learning
process; (4) detect learning difficulties,
discriminating and characterizing their possible
causes; (5) promote students according to the
proficiency level obtained in the evaluation; and (6)
motivate and provide feedback to students. In this
context, the assessment of learning takes a central
position within the process of teaching and learning,
in a cycle that begins with the students' knowledge
and the definition of educational objectives and
proceeds with the choice of assessment methods and
criteria.
As already stated in the opening of this article,
for the elaboration of educational objectives,
professors can make use of classification schemes,
such as the Taxonomy of Educational Objectives -
Cognitive Domain, elaborated by Bloom and his
contributors. The cognitive domain is concerned
with information and knowledge. In this way, the
achievement of cognitive objectives is the
fundamental activity of most educational and
training programs. According to Bloom, this domain
is subdivided into six main abilities:
Knowledge: defined as the student's ability to
memorize learned information. The assessment
of this category verifies the student's capacity to
retain what was taught.
Comprehension: the student's capacity to reason
about, understand or learn the concepts and
information presented by the professor. Here, the
assessment verifies the student's interpretation
and explanation capacity.
Application: the use of learned information in
real situations. Once a student knows a concept
and understands it, he is able to apply it. When a
student correctly applies a concept, it can be said
that he has "learned" it, because he knows,
understands and uses the new concept to solve
real problems.
Analysis: the capacity to decompose information
and thus relate and understand its formation and
organization. The assessment of this cognitive
ability intends to verify convergent production
capacity.
Synthesis: the capacity to join two or more
concepts to form a new one. The assessment of
this ability verifies creative and productive
capacity.
Evaluation: judging the importance of
information against a set of norms and criteria.
Here the assessment verifies all the other
categories.
These cognitive abilities form a hierarchy,
ordered from the simplest and most concrete
(Knowledge) to the most complex and abstract
(Evaluation).
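To make this ordering concrete, the following sketch (in Python, not part of the original work) models the cognitive domain as an ordered enumeration; the numeric values merely mirror the hierarchy just described.

```python
from enum import IntEnum

class CognitiveAbility(IntEnum):
    """Bloom's cognitive domain, ordered from the most concrete to the most abstract."""
    KNOWLEDGE = 1      # recall of learned information
    COMPREHENSION = 2  # interpretation and explanation
    APPLICATION = 3    # use of concepts in real situations
    ANALYSIS = 4       # decomposition and relation of information
    SYNTHESIS = 5      # combination of concepts into a new whole
    EVALUATION = 6     # judgement against norms and criteria

# Example: Application is a more complex ability than Comprehension.
assert CognitiveAbility.APPLICATION > CognitiveAbility.COMPREHENSION
```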
Bloom (1983) defines three modalities of
assessment that can be carried out throughout the
circular assessment process: Diagnostic, Formative
and Summative.
The Diagnostic assessment is used to determine
whether the student has the necessary prerequisites
for the acquisition of new specific knowledge. The
recommendation is for this evaluation to be carried
out at the beginning of the course, semester or unit
of instruction (Haydt, 2000).
The Formative assessment is done with the
intention of verifying whether the student is reaching
the established objectives during the course. This
assessment aims, basically, at evaluating whether the
student will be able to continue to a subsequent stage
of the course (Albuquerque, 1995). Formative
assessment therefore makes it possible: to give the
student feedback on what he has learned and what he
still needs to learn; to give the professor feedback,
identifying the students' failures and which aspects
of instruction must be modified; and to attend to the
individual differences among students
and to prescribe alternative measures for recovering
from learning failures (Bloom, 1977).
Finally, the Summative assessment, the model
most commonly used by educational institutions, is
used to classify students. Held at the end of a school
year or unit of instruction, it classifies the students
according to previously established achievement
levels, generally aiming at their promotion from one
level to the next; it therefore totalizes the results of a
concluded study. Through this assessment model it
is possible to observe whether the established
objectives were reached by the students and also to
obtain data to refine the teaching-learning process
(Haydt, 2000).
In (Santos, 2006), the author argues that the
assessment functions should not be used separately,
because each one complements the others. The
diagnostic function is only meaningful when used at
the beginning of the didactic-pedagogical process,
serving to indicate the direction to be followed in the
teaching-learning process. This process should then
be constantly reviewed in light of the data gathered
from the formative assessments, in order to keep the
educational objectives on track, and finally each
student can be classified by the average achievement
obtained, according to the metrics established by the
educational institution.
3 AN ASSESSMENT
METHODOLOGY PROPOSAL
An effective assessment methodology is one that is
not concerned only with the pass/fail condition, but
which focuses especially on monitoring the student's
behaviour when facing an assessment, also providing
resources that enable him to strengthen and improve
his knowledge on the weak points identified by the
assessment.
Garg and Varma (Garg, Varma, 2009) propose a
different methodology in pursuit of quality in the
teaching-learning process. Their article proposes the
use of case studies, carefully designed to be used as
instruments of student assessment. The case studies
help in assessing student competence on important
aspects and learning goals, while also being aligned
with the goals of motivating, learning and providing
feedback.
Aiming at a truly efficient assessment process,
one that contemplates the main features and goals of
assessments and thus allows a better use of the
different evaluation instruments, an assessment
methodology based on Bloom's Taxonomy was
defined and systematized. Figure 1 illustrates this
methodology's stages and activities, divided into
three phases: Preparation, Formative Evaluations
and Summative Evaluation.

Figure 1: Proposed assessment methodology.
In the Preparation phase, the questions that will
form the exams, both formative and summative, are
created. It is also in this phase that the professor
defines which cognitive abilities he wishes to
evaluate. Professors must be very careful when
creating questions, mainly regarding their difficulty
level and the number of questions available for each
level. This precaution is vital to prevent the problem
of "false expectations" for the student. The choice of
which of Bloom's cognitive abilities to evaluate must
be made according to the professor's own criteria,
using the evolution of the teaching and learning
process as a reference. Each chosen ability must be
associated with one or more questions.
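As an illustration only, a question bank for this phase could be structured as in the sketch below; the field names and the 1-5 difficulty scale are assumptions made for the example, not the actual Smart Education data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

BLOOM_ABILITIES = ("knowledge", "comprehension", "application",
                   "analysis", "synthesis", "evaluation")

@dataclass
class Question:
    text: str
    subject: str
    topic: str
    ability: str                  # one of BLOOM_ABILITIES
    difficulty: int               # e.g. 1 (easy) to 5 (hard) -- illustrative scale
    choices: List[str] = field(default_factory=list)  # empty for open questions
    answer: Optional[str] = None  # correct option for multiple-choice items

def questions_for(bank: List[Question], ability: str) -> List[Question]:
    """Select only the questions associated with a chosen cognitive ability."""
    return [q for q in bank if q.ability == ability]
```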
The second phase is dedicated to the elaboration
and application of formative evaluations, focused on
continuous assessment with the intention of
identifying learning gaps. The number of
assessments to be applied in this phase is defined by
the professor; however, the number of formative
evaluations must always be equal to or greater than
the number of summative evaluations. The
evaluations carried out in this phase do not
determine the approval or failure of the students; the
values the students achieve in them serve only to
measure their level of knowledge acquisition.
Finally, in the third phase, summative
evaluations are elaborated and applied, aiming at
verifying the learning results achieved by the
students according to the established achievement
levels, which will determine the students' approval
or failure.
The Formative and Summative Evaluation stages are
composed of four activities:
Activity 1 - Performance Prediction: in this
stage students answer a self-assessment exam that
measures the degree of confidence each student has
in answering questions related to the subjects/topics
that form the evaluation. The self-assessment exam
consists of a questionnaire in which the student
answers "Yes", "Perhaps" or "No" about his ability
to solve questions related to the subjects and topics
that will form the exam.
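The paper does not specify how these answers are scored; one plausible sketch, assuming weights of 1.0, 0.5 and 0.0 for "Yes", "Perhaps" and "No", is:

```python
# Map each self-assessment answer to an assumed confidence weight.
CONFIDENCE = {"yes": 1.0, "perhaps": 0.5, "no": 0.0}

def predicted_performance(answers: dict[str, str]) -> float:
    """Average confidence over the topics that will appear in the exam.

    `answers` maps a topic name to "yes", "perhaps" or "no".
    Returns a value between 0.0 and 1.0.
    """
    if not answers:
        return 0.0
    total = sum(CONFIDENCE[a.lower()] for a in answers.values())
    return total / len(answers)

# Example usage: one confident topic and one uncertain topic -> 0.75
print(predicted_performance({"unit testing": "yes", "test planning": "perhaps"}))
```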
Activity 2 - Exam Resolution: in this stage, the
exam is applied to the students, who must try to
solve the questions, with the objective of identifying
their degree of knowledge in each subject or topic of
the discipline.
Activity 3 - Exam Correction: in this stage, the
professor corrects the students' exams, comments on
the given answers item by item, and releases the
corrected exams so that the students can verify in
which questions they were right or wrong. It is in
this stage that the quantitative and qualitative indices
are generated, which will contribute to a successful
accomplishment of the next stage.
Activity 4 - Feedback and Orientation: in this
stage, the professor elaborates and sends feedback to
each student, based on their performance. Using the
quantitative and qualitative indices generated during
the correction of the evaluations in the previous
stage, the professor analyzes them and sends his
feedback to the student. The indices help to indicate
with precision the aspects in which the students are
performing better or worse, making the creation of
feedback easier for the professor.
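How the quantitative indices are computed is not detailed in the paper; a minimal sketch, assuming each corrected item carries a fractional score and is tagged with its topic and cognitive ability, could aggregate them as follows:

```python
from collections import defaultdict

def performance_indices(corrected_items):
    """Aggregate per-topic and per-ability averages from a corrected exam.

    Each item is a dict such as:
        {"topic": "unit testing", "ability": "application", "score": 0.8}
    where `score` is the fraction of the item's value awarded by the correction.
    """
    by_topic, by_ability = defaultdict(list), defaultdict(list)
    for item in corrected_items:
        by_topic[item["topic"]].append(item["score"])
        by_ability[item["ability"]].append(item["score"])

    def _avg(xs):
        return sum(xs) / len(xs)

    return ({t: _avg(s) for t, s in by_topic.items()},
            {a: _avg(s) for a, s in by_ability.items()})
```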
4 THE INFORMATION SYSTEM
SMART EDUCATION
With the purpose of validating the methodology
proposed in Section 3, an information system
centered on an effective assessment process was
implemented, named Smart Education. Its purpose is
to assist in the management of questions and
evaluations, as well as to facilitate the accompaniment
of learning and to provide feedback for students and
professors.
The system is basically divided into two profiles:
professor and student. Professors and students go
through the login process, gaining access to system
features in accordance with their profile. Figure 2
presents the professor's profile interface.
Figure 2: Smart Education: Professor's profile UI.

Smart Education works attached to the virtual
learning environment Moodle (free and open source)
[www.moodle.org], from which all the basic
information about courses, subjects, teachers and
students is extracted. In this way, content already
registered does not need to be migrated, and there is
no need to replicate the course structure already
created within the virtual learning environment,
which is common nowadays in many educational
institutions. Thus, to start using the system, users
(teachers or students) must be previously registered
in Moodle. It is precisely with this registry that both
teachers and students log into the system. After a
successful authentication, a window is shown with
content related to the teacher or the student,
depending on the profile registered in Moodle.
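The integration mechanism with Moodle is not described in the paper; one possible reading, assuming direct access to a Moodle database with the default `mdl_` table prefix, is sketched below purely for illustration (a real Moodle installation would normally use MySQL or PostgreSQL rather than SQLite).

```python
import sqlite3  # placeholder driver for the sketch; not what Moodle normally uses

def load_moodle_basics(db_path: str):
    """Read basic course and user data from a Moodle-style database.

    Assumes Moodle's default table prefix `mdl_`; the actual integration
    used by Smart Education is not specified in the paper.
    """
    conn = sqlite3.connect(db_path)
    courses = conn.execute("SELECT id, fullname FROM mdl_course").fetchall()
    users = conn.execute(
        "SELECT id, username, firstname, lastname FROM mdl_user").fetchall()
    conn.close()
    return courses, users
```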
In general, the professor can create exams for all
three methodology phases (Preparation, Formative
and Summative), apply and correct them; create
questions in several formats and types, associated
with Bloom's cognitive abilities; organize questions
by subjects and topics; consult reports with
diversified information regarding students'
performance in given subjects, topics and cognitive
abilities; and follow up his students' learning. The
professor can also visualize the assessment
methodology indicated by the tool.
One of this system's differentials lies in the
"Questões" feature, where professors find the
"Manter Questões" functionality, which allows them
to register, modify, exclude, search and visualize
questions. Questions can be either discursive (open)
or objective (multiple choice), and are used in the
creation of exams. During the registration of a new
question, some information is requested by the
system, such as the difficulty level, subject, topic and
the Bloom's cognitive ability to which the question
is related, as illustrated in Figure 3. Thus, when a
professor accesses the questions with the intention of
elaborating an exam, he is also able to check the
difficulty level of each one of them, automatically
calculated by the tool, and is certain that the exam
will contain only questions related to the chosen
subjects, topics and cognitive abilities.
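The paper does not explain how the tool calculates the difficulty level; a common approach, assumed here only for illustration, is to estimate it from the fraction of past attempts answered incorrectly:

```python
def estimated_difficulty(past_results: list[bool]) -> float:
    """Estimate a question's difficulty as the fraction of past attempts answered wrongly.

    `past_results` holds True for a correct answer and False for a wrong one.
    Returns 0.0 (very easy) to 1.0 (very hard); 0.5 is used when there is no history.
    """
    if not past_results:
        return 0.5  # neutral default until the question has been answered
    wrong = sum(1 for correct in past_results if not correct)
    return wrong / len(past_results)
```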
Figure 3: Smart Education: Professor UI.

Figure 4: Sample performance report on assessments of a student.

Another important feature is "Acompanhamento",
which is responsible for automatically providing the
student's and the class's performance reports to
professors after the correction of all exams is
concluded. This report provides the qualitative
indices referring to the exams' results (as illustrated
in Figure 4). It also contains performance charts
divided by topics, cognitive abilities and level of
knowledge acquisition, referring to the current exam
or to previous ones. Based on this information, the
professor can provide feedback to the students,
complemented by his personal opinion if he believes
it to be necessary. The report is automatically stored
in the database, serving as historical data of the
student's learning development.
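As an illustrative sketch only (not the actual report generator), per-topic rows of such a report could be assembled by comparing the student's averages with the class averages:

```python
def report_rows(student_scores: dict[str, float], class_scores: dict[str, float]):
    """Build report rows comparing a student's per-topic average with the class average.

    Both inputs map a topic name to an average score between 0 and 1.
    """
    rows = []
    for topic, score in sorted(student_scores.items()):
        class_avg = class_scores.get(topic, 0.0)
        rows.append({"topic": topic,
                     "student": round(score, 2),
                     "class": round(class_avg, 2),
                     "gap": round(score - class_avg, 2)})
    return rows
```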
For students, there are features such as answering
exams; consulting accompaniment reports containing
the results achieved in the exams; visualizing the
corrected exam and the comments made by the
professor; and visualizing all the grades achieved in
all exams of all disciplines.
5 EVALUATING SMART
EDUCATION TOOL
The Smart Education tool was used in the
"Software Testing" discipline of a Master's course at
C.E.S.A.R. (www.cesar.org.br), an ICT innovation
institute, with a group of four students and three
exams to be taken: two of formative character, each
of them including a self-assessment test, and one of
summative character, ending the assessment cycle of
the discipline. At the beginning of the first two
exams, students received guidance regarding the
assessment methodology and the discipline's related
educational purposes.
Students and professors were registered in
Moodle so that they could access Smart Education.
Professors created the questions needed for all
exams. Altogether, 30 questions were developed
and, for each one, the professor was asked to inform,
besides the actual question, the subject, topic and
knowledge area related to it, and also to register the
correct answers for the multiple choice questions.
The system automatically created the self-assessment
tests in accordance with the subjects of the chosen
questions. After that, an email was sent to students,
informing the date and the start and end times of the
exam, followed by the instructions and rules for
taking it.
Multiple choice questions were automatically
corrected by the system, whereas subjective
questions were corrected by the professor, who
added comments on each given response. After the
corrections were concluded, the corrected exams
were sent by email to the students. Feedback reports
were generated by the system, then analysed and
commented on by the professor, and sent via email
to each student.
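A minimal sketch of automatic multiple-choice correction of the kind described above; the answer-key layout is an assumption made for the example, not the Smart Education implementation:

```python
def correct_multiple_choice(answer_key: dict[str, str], submitted: dict[str, str]):
    """Compare a student's submitted options against the answer key.

    Both dictionaries map a question id to the chosen option (e.g. "b").
    Returns the per-question result and the overall score as a fraction.
    """
    results = {qid: submitted.get(qid) == correct
               for qid, correct in answer_key.items()}
    score = sum(results.values()) / len(answer_key) if answer_key else 0.0
    return results, score

# Example: two of three questions answered correctly -> score of about 0.67.
results, score = correct_multiple_choice({"q1": "a", "q2": "c", "q3": "b"},
                                         {"q1": "a", "q2": "c", "q3": "d"})
```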
Finally, a research questionnaire containing 15
questions was sent to everyone involved in the
process (professors and students), in order to collect
opinions and impressions of the applied methodology.
Great acceptance was identified, with an average
grade of 8.4 given by those involved, who stated that
they preferred this assessment format over traditional
assessment methods.
For a better visualization of the results achieved
during the three exams, the graph displayed in
Figure 5 presents each student's performance. The
graph shows the NAI (Level of Acquisition of
Information) that the students achieved in each of
the exams. This metric, adapted from (Pimentel,
Omar, 2006), is used to measure and monitor a
student's degree of knowledge for each subject or
topic of the discipline; thus, the score achieved in
each exam is an NAI.
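The exact NAI formula adapted from (Pimentel, Omar, 2006) is not reproduced in the paper; the sketch below simply treats each exam score as the student's NAI and computes its change between consecutive exams, which is the evolution plotted in Figure 5:

```python
def nai_evolution(exam_scores: dict[str, list[float]]):
    """For each student, pair every exam's NAI (here, the exam score) with its
    change relative to the previous exam.

    `exam_scores` maps a student id to the chronological list of exam scores.
    """
    evolution = {}
    for student, scores in exam_scores.items():
        deltas = [None] + [round(b - a, 2) for a, b in zip(scores, scores[1:])]
        evolution[student] = list(zip(scores, deltas))
    return evolution

# Example for two students across three exams (values are illustrative only):
print(nai_evolution({"A": [9.5, 9.3, 9.3], "B": [7.0, 6.5, 8.2]}))
```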
Figure 5: Student's performance evaluation in Software Testing discipline.
It is possible to observe in this graph that two
students improved their performance between the
first and second exams, while the other two
presented a performance decrease. It is important to
explain that, by following and doing all the activities
foreseen by the methodology, the students were able
to achieve a significant improvement in their NAIs,
since it was possible to identify their learning
difficulties precisely and to act accordingly to
correct them. This improvement can be noticed by
comparing the students' evolution throughout the
whole assessment process: three students (B, C and
D) achieved a better performance in the third exam
than in the two previous ones, while Student A
practically kept his excellent performance, with a
reduction of only 2 points in relation to the previous
exam.
It is worth mentioning that the performance
report is a very complete instrument (on average,
four pages long), consisting of performance graphs
for each exam and for the class, besides information
defining the abilities and the professor's opinion; it
is not reproduced in this article for reasons of space.
6 CONCLUSIONS
Nowadays there is a great variety of systems that
evaluate students through the Web, such as
Sisa-Web, AvalWeb, WebTest, HotPotatoes, Net
Class, WebCT and Moodle itself, to which Smart
Education is attached. However, these tools ignore
important aspects of the learning assessment process,
mainly regarding the creation of qualitative
assessments focused on the accompaniment of the
student's learning, seeking to identify learning gaps
and allowing the generation of personalized and
individualized feedback. A web system that can
automate some of these tasks and support others
represents an excellent alternative for supporting the
teaching and learning process. By adopting Smart
Education, the activities of evaluating and following
the student's learning can be more agile and less
costly, without reducing the professor's responsibility
as an educator, while giving him more solid and
precise information for evaluating.
Regarding the experiment presented, the authors
acknowledge that it needs to be explored further,
applying it to bigger groups and a greater number of
disciplines. However, it was already possible to
notice that the definition of educational objectives
using Bloom's taxonomy constituted a basic element
in the assessment process, since it made it possible
for professors to define and plan in advance the
results to be reached by their students, as well as to
establish which cognitive abilities would have to be
developed. With the definition of the educational
objectives, the goals to be reached were made clear,
making it possible to measure learning quality and
effectiveness. Additionally, it facilitated the selection
of the subjects to be taught during the disciplines,
highlighting those with greater relevance that, in the
professor's view, should therefore compose the
exams.
REFERENCES
Albuquerque, I. M. (1995). Avaliação no Processo de Ensino-Aprendizagem. Monograph, Specialization in Educational Planning, Universidade de Fortaleza, Fortaleza.
Alexandre, G. H. S. (2008). Smart Education – Uma ferramenta WEB para avaliação e acompanhamento do aprendizado. Master's thesis, C.E.S.A.R., Recife.
Bloom, Benjamin S. et al. (1983). Manual de avaliação formativa e somativa do aprendizado escolar. Pioneira, São Paulo, 1st edition.
Bloom, Benjamin S. et al. (1977). Taxionomia de objetivos educacionais: domínio cognitivo. Globo, Porto Alegre, 6th edition.
Earl, Shirley; McConnell, Mike; Middleton, Iain et al. (1998). Assessing Student Performance: A Course Booklet for the Postgraduate Certificate in Tertiary-Level Teaching. Web course, The Robert Gordon University, United Kingdom.
Garg, Kirti; Varma, Vasudeva (2009). Case Studies as Assessment Tools in Software Engineering Classrooms. In: 22nd Conference on Software Engineering Education and Training, Hyderabad, India. IEEE Computer Society.
Gomes, Maria João (2005). E-Learning: reflexões em torno do conceito. In Paulo Dias e Varela de Freitas (orgs.), Atas da IV Conferência Internacional de Tecnologias de Informação e Comunicação na Educação – Challenges'05, Braga: Centro de Competência da Universidade do Minho, pp. 229-236, ISBN 972-87-46-13-05 [CD-ROM].
Haydt, Regina Cazux (2000). Avaliação do processo Ensino-Aprendizagem. Ática, São Paulo, 6th edition.
Levy, Pierre (1993). As tecnologias da inteligência: o futuro do pensamento na era da informática. Ed. 34, Rio de Janeiro.
Pimentel, E. P.; Omar, Nizam (2006). Métricas para o Mapeamento do Conhecimento do Aprendiz em Ambientes Computacionais de Aprendizagem. In: XVII Simpósio Brasileiro de Informática na Educação, Brasília. Anais do XVII Simpósio Brasileiro de Informática na Educação, pp. 247-256.
Santos, J. F. S. (2006). Avaliação no ensino a distância. Revista Iberoamericana de Educación (Online), Madrid, v. 38, n. 4.