every task and automatically generate hints
and warnings about errors. This can also be used
by the instructors for identifying mistakes.
Since the learners' actions are stored task by task
and the criteria for task assessment are created at
task definition time, it is possible to compare the
results of the learners' work with the results
provided at task creation. The assessment can be
done on the entire work or on a set of tasks
(partial assessment). By registering the learner's
work task by task and relating the learning tasks to
learning objectives, we can assess the actual
learning progress.
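A minimal sketch of how such a task-by-task comparison and partial assessment might be organised is given below; the TaskResult structure, its fields and the scoring rule are hypothetical and serve only to illustrate the idea, not the actual system.

```python
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    # Hypothetical record of one learning task: the criteria stored at
    # task creation and the items the learner's registered work produced.
    task_id: str
    expected: set = field(default_factory=set)   # set at task definition
    produced: set = field(default_factory=set)   # registered learner work

    def score(self) -> float:
        """Share of the expected items covered by the learner's work."""
        if not self.expected:
            return 1.0
        return len(self.expected & self.produced) / len(self.expected)

def partial_assessment(results, task_ids=None):
    """Average score over all tasks, or over a chosen subset of tasks."""
    selected = [r for r in results if task_ids is None or r.task_id in task_ids]
    if not selected:
        return 0.0
    return sum(r.score() for r in selected) / len(selected)

# Usage: assess the entire work, or only the tasks related to one objective.
results = [
    TaskResult("t1", expected={"halo", "throne"}, produced={"halo"}),
    TaskResult("t2", expected={"gesture"}, produced={"gesture", "mandorla"}),
]
print(partial_assessment(results))            # entire work
print(partial_assessment(results, {"t1"}))    # partial assessment
```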
4.1 Measuring the Relevance of a
Task-focused Collection
The assessment of the relevance of the selected
multimedia objects (primary images) is an important
part of the evaluation of the learner's work, because
it captures the learner's ability to distinguish the
different aspects, elements and symbolism of the
iconographical objects. The assessment approach is
based on comparing the symbolic representation of
the learning task with the objects' semantic
(metadata) descriptions. In this way the learning
environment can evaluate whether the developed
task-focused collections of multimedia objects
contain sufficiently rich and varied illustrative
material to back up the analyses (e.g. checking for a
minimal number of analysed objects, sufficient
coverage of authors, iconographic schools and time
periods, diversity of desired characteristics, etc.).
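Such checks could be realised as simple counters over the metadata of the selected objects. The sketch below assumes hypothetical metadata fields (author, school, period) and thresholds, and only illustrates how a task-focused collection might be validated against criteria of this kind.

```python
def check_collection(objects, min_objects=5, min_authors=3,
                     min_schools=2, min_periods=2):
    """Return warnings when the task-focused collection does not meet
    the (hypothetical) richness and diversity criteria of the task."""
    warnings = []
    if len(objects) < min_objects:
        warnings.append(f"only {len(objects)} objects, {min_objects} required")

    # Coverage counters derived from the semantic (metadata) descriptions.
    for attr, minimum in (("author", min_authors),
                          ("school", min_schools),
                          ("period", min_periods)):
        values = {obj.get(attr) for obj in objects if obj.get(attr)}
        if len(values) < minimum:
            warnings.append(f"{attr} coverage too low: {len(values)} < {minimum}")
    return warnings

# Illustrative metadata records for two selected primary images.
collection = [
    {"author": "unknown", "school": "Tarnovo", "period": "14th c."},
    {"author": "Zahari Zograf", "school": "Samokov", "period": "19th c."},
]
print(check_collection(collection))
```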
4.2 Assessment of the Quality of the
Analysis
The analysis made by the learner is a process of
comparing different characteristics of the
iconographic objects. It requires the learner to
master essential aspects, features, relations,
artefacts and directions in the learning domain; this
knowledge is acquired by performing the learning
tasks. As a result of the analysis, the learner
prepares a textual description that contains the
learner's contribution. The quality of this analysis
is the subject of the assessment. The assessment of a
textual contribution cannot be fully automated; the
essential part here is to support the evaluators by
presenting measurable counters to them. We intend
to use the methodology described as the
knowledge-rich approach in (Osenova and Simov,
2010). Knowledge-rich methods rely on analysing
the text with knowledge sources external to it, such
as ontologies, lexicons and grammars. These sources
are used to achieve a semantically rich text analysis
that explicates the conceptual content of the
learner's answers.
The assessment of the learner's analysis is based
on finding the terms for domain concepts (and their
possibly different linguistic forms) in the text
entered by the learner. The evaluator can use the
following parameters generated by the system: the
(obligatory or desirable) terms used, the terms
missed, the frequency of term occurrences, term
collocations within a paragraph (a hint that they
may also be semantically related in the text) used by
the learner and the instructor, and the number of
terms used by the learner. If the number of terms
used by the learner and the number of terms used by
the instructor are approximately equal, the
probability that the analysis satisfies the learning
objectives is high. The system cannot automatically
grade the learner's work, but it supports the
assessment by presenting meaningful counters to the
evaluator, for example the concepts the learner
missed or the terms the learner overused. These
counters can help the instructor, but the final grade
is given by the evaluator. We also intend to use the
approach continuously in order to send hints to the
learners when they have missed some concepts in
their analysis.
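A minimal sketch of such counters, assuming a small hypothetical lexicon that maps domain concepts to terms and their variants (standing in for the ontology and lexicon resources mentioned above), might look as follows; it shows only the kind of figures presented to the evaluator, not the actual semantic text analysis.

```python
import re
from collections import Counter

# Hypothetical lexicon: domain concept -> terms and linguistic variants.
LEXICON = {
    "halo": ["halo", "haloes", "nimbus"],
    "gesture": ["gesture", "gestures"],
}

def term_counters(text: str):
    """Count occurrences of each concept's terms in a text."""
    counts = Counter(re.findall(r"[a-zA-Z]+", text.lower()))
    return {concept: sum(counts[t] for t in terms)
            for concept, terms in LEXICON.items()}

def compare(learner_text: str, instructor_text: str):
    """Counters supporting the evaluator: missed and overused concepts."""
    learner = term_counters(learner_text)
    instructor = term_counters(instructor_text)
    missed = [c for c, n in instructor.items() if n and not learner[c]]
    overused = [c for c in learner if learner[c] > 2 * max(instructor[c], 1)]
    return {"learner": learner, "instructor": instructor,
            "missed": missed, "overused": overused}

print(compare("The saint has a halo.", "Note the nimbus and the gesture."))
```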
4.3 Evaluation of the Visual
Appearance of the Entire
Document
Since the learning tasks are by definition not highly
prescriptive about formatting the document, the
evaluation of the visual appearance of an already
prepared multimedia document has very subjective
elements. In general, the tasks do not prescribe
anything about the visual appearance of the texts
and images in the document. Even if the document
is structured in accordance with the learning tasks
and the analysis seems suitable according to the
terms used, the entire document might still not have
a good appearance for external users. Although the
visual appearance of the document is very
subjective, it seems to depend on formatting
features such as the size and position of the images
and texts, the amount and balance of text and
images, the colours of the texts and background,
and many others.
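Some of these formatting features can at least be quantified as coarse indicators. The sketch below assumes a hypothetical list of document elements, each with a type and an occupied area, and computes a simple text/image balance; it does not claim to measure visual quality.

```python
def layout_indicators(elements):
    """Coarse, purely quantitative indicators of the document layout.
    `elements` is a hypothetical list of dicts with 'type' ('text'/'image')
    and 'area' (space occupied in the rendered document)."""
    text_area = sum(e["area"] for e in elements if e["type"] == "text")
    image_area = sum(e["area"] for e in elements if e["type"] == "image")
    total = (text_area + image_area) or 1
    return {
        "text_share": text_area / total,
        "image_share": image_area / total,
        "element_count": len(elements),
    }

print(layout_indicators([
    {"type": "text", "area": 600}, {"type": "image", "area": 400},
]))
```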
Since the criteria for the assessment of visual
appearance are mainly qualitative, and the measure
of visual appearance is not the major goal, subject of