experts approved of the proposed criteria because
such resources could help boost the learner’s
motivation and provide useful information to
complement the text. Similar comments were made
in regard to the next highest scoring item: ‘video’
(3.6).
Then came ‘text’, ‘sound’ and ‘multimedia’
(3.5), whose slightly lower score reflected the fact
that these are highly specialized domains with which
some of the experts were unfamiliar.
The experts therefore deemed it important that
those using the evaluation instrument either be
acquainted with the subjects covered by the various
criteria or omit those criteria from their
evaluation.
The experts approved or fully approved of all of
the items in the final section of the instrument,
covering usability criteria with a specific focus on
navigational aspects (figure 4).
The highest scoring item was ‘are web pages simple
and devoid of heavy graphics?’ (4.0). Next came
‘does the user know the places they are visiting and
the objectives of the LO?’ (3.9). This was followed
by three items, each scoring 3.8: ‘do home pages
slow down the user’s interaction with those pages?’
(potentially demotivating); ‘is the page structure
flexible enough to allow the user to play an active
role during navigation?’; and ‘is the user aware of
where they are within the site architecture?’
The remaining five navigation-related items all
achieved a score of 3.7.
The post-evaluation face-to-face interviews with
the experts enabled us to gather qualitative
information on the instrument itself, together with a
number of suggestions for its improvement. All of
the experts agreed with the proposed criteria and the
items considered.
They also suggested editorial changes in the
wording of the items. Some said that it was
advisable to avoid the use of expressions such as
“must have” because they were too imposing and
could complicate matters for the evaluators.
They also noted that items should be worded
briefly and should avoid examples likely to bias the
evaluation. All figures in this paper have been
revised in line with the experts’ qualitative
feedback.
In their final comments, some of the experts
suggested other kinds of scales aimed at rating
specific aspects of LO quality. Based on their
suggestions, we have decided to introduce into our
instrument the rating scale shown in Table 1.
Table 1: LO evaluation rating scale.

Score range   Rating
1.0 – 1.5     Very Low: LO quality is very poor; the LO should be discarded
1.6 – 2.5     Low: LO quality is poor; major improvement is required
2.6 – 3.5     Acceptable: LO quality is adequate, but improvement is needed
3.6 – 4.5     High: LO quality is good, but can be improved
4.6 – 5.0     Very High: LO quality is very good; no improvement is needed
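The bands in Table 1 amount to a simple lookup from a numeric score to a rating label. As a minimal sketch (our own illustration, not part of the instrument), the mapping could be implemented as:

```python
# Hypothetical helper: map a 1.0-5.0 LO score onto the Table 1 rating
# labels. Band upper bounds follow Table 1; the function name is our own.

RATING_BANDS = [
    (1.5, "Very Low"),
    (2.5, "Low"),
    (3.5, "Acceptable"),
    (4.5, "High"),
    (5.0, "Very High"),
]

def rating_label(score: float) -> str:
    """Return the Table 1 label for a score in the 1.0-5.0 range."""
    if not 1.0 <= score <= 5.0:
        raise ValueError("score must lie between 1.0 and 5.0")
    for upper, label in RATING_BANDS:
        if score <= upper:
            return label
    return "Very High"  # unreachable given the range check above
```

For example, the item score of 3.6 reported above falls in the ‘High’ band.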
For a balanced evaluation of LO quality, we
suggest calculating an average final score for each of
the four sections of the instrument, so as to extract a
specific value to add to our LO metadata typology,
which is based on the LOM 9. Classification
metadata category (Morales, García and Barrón,
2007b). The aim here is to introduce numeric values
that will help users find and retrieve LOs according
to quality-related criteria, and that will enable us to
develop more sophisticated LO management
capabilities, e.g. automated approaches using
intelligent agents that pave the way for new
quality-based LO management tasks (Gil, García
and Morales, 2007; Morales, Gil and García, 2007).
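The scoring step just described can be sketched as follows. This is our own illustration under stated assumptions: the section names are hypothetical, and the overall value is taken as the plain average of the four section averages.

```python
# Sketch of the proposed scoring: average the item scores of each of the
# instrument's four sections, then combine the section averages into a
# single value that could be stored in the LO's quality metadata.

def section_average(item_scores):
    """Average the 1.0-5.0 item scores of one section of the instrument."""
    return sum(item_scores) / len(item_scores)

def lo_quality_value(sections):
    """sections: dict mapping section name -> list of item scores.

    Returns (per-section averages, overall average across sections).
    """
    section_scores = {name: section_average(items)
                      for name, items in sections.items()}
    overall = sum(section_scores.values()) / len(section_scores)
    return section_scores, overall
```

The overall value could then be placed in one of the Table 1 bands before being written into the metadata record.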
3 CONCLUSIONS
Our model LO quality evaluation instrument
contains a wide variety of criteria aimed at
enhancing the core pedagogical quality of LOs:
logical and psychological meaningfulness. The
first set of criteria concerns curricular issues, i.e.
whether the LO is consistent with the study
programme objectives, content, activities and so on.
The second centres on the learners’ characteristics:
learning ability, motivation, interactivity, and so on.
In order to produce a holistic evaluation of an
LO’s quality as a pedagogical digital resource (in
line with our definition of what constitutes an LO),
the instrument also focuses on assessing technical
criteria. As these resources can consist of
different kinds of media, our model takes
into consideration the most commonly used
multimedia resources: images, video, etc.
Finally, since LOs are composed of different
kinds of media, it is important to ensure that each is
rendered accessible: e.g. an Internet site or web page
designed to enable all kinds of users to access them,
AN EVALUATION INSTRUMENT FOR LEARNING OBJECT QUALITY AND MANAGEMENT