influenced by the information from the visual fields
and consider all the other information from senses,
performed evaluation with significantly higher
thoroughness, validity, effectiveness and sensitivity
than field-dependent individuals, who tend to be
greatly influenced by the dominant visual field.
A set of experiments comparing the effectiveness
of the MOT technique (inspection by metaphors of
human thinking) with heuristic evaluation, cognitive
walkthrough and “think aloud” testing for novice
evaluators was introduced by Frøkjær and Hornbæk
(Frøkjær and Hornbæk, 2008). In their experiments
they demonstrated that MOT was more useful as an
inspection technique for novice evaluators than
heuristic evaluation: the evaluators found an equal
number of problems with the two methods, but the
problems found with MOT were more serious, more
complex to repair, and more likely to persist for
expert users. However, understanding MOT as
a technique for evaluating interfaces appeared to be
difficult.
Another comparative study was presented in
(Lanzilotti et al., 2011). The study involved novice
evaluators and end users, who evaluated an
e-learning application using one of three techniques:
pattern-based inspection, heuristic evaluation and
user testing. The authors showed that pattern-based
inspection reduces reliance on individual skills,
permits the discovery of a larger set of different
problems, and decreases evaluation costs.
Moreover, the results of the study indicated that
evaluation in general is strongly dependent on the
methodological approach, judgement bias and
individual preferences of the evaluators. The authors
also state that patterns help to share and transfer
knowledge between inspectors and thus simplify the
evaluation process for novice evaluators.
Further experiments on improving heuristic
evaluation performed by novice evaluators were
carried out by Botella, Alarcon and Penalver (Botella,
Alarcon and Penalver, 2013). They proposed a
framework for improving the usability reports of
novice evaluators by combining the classical usability
report with interaction design patterns. However, they
did not provide any evidence of the method's usage or
its comparison to other techniques.
3 HEURISTIC EVALUATION
METHOD
Heuristic evaluation is one of the most widely used
methods for application evaluation. While using
the application, an expert checks and marks
predefined areas in order to note their compliance
with interface design guidelines, also called
heuristics, and looks for potential problems.
3.1 General Description
In the heuristic evaluation method, each of those
predefined areas can be divided into several more
detailed sub-areas and assigned questions for the
expert to answer while working with the
application.
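The hierarchy of areas, sub-areas, and expert questions described above can be sketched as a simple data structure. This is a hypothetical illustration: the area names, sub-areas, and questions below are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch of a heuristic-evaluation checklist:
# each top-level area is split into sub-areas, and each
# sub-area carries questions for the expert to answer.
checklist = {
    "Application interface": {
        "Consistency": [
            "Are colours and fonts used consistently across screens?",
        ],
        "Layout": [
            "Are related controls grouped together?",
        ],
    },
    "Navigation and data structure": {
        "Menus": [
            "Can the user always return to the main screen?",
        ],
    },
}

def questions_for_expert(checklist):
    """Flatten the hierarchy into (area, sub-area, question) triples."""
    for area, sub_areas in checklist.items():
        for sub_area, questions in sub_areas.items():
            for question in questions:
                yield area, sub_area, question

for area, sub, question in questions_for_expert(checklist):
    print(f"[{area} / {sub}] {question}")
```

Flattening the checklist this way gives the expert a single linear list of questions to work through while using the application.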
The main advantage of the method is its low cost:
it does not require representative samples of users,
special equipment or laboratory settings. Moreover,
experts can detect a wide range of system problems in
a limited period of time. As the main drawbacks,
studies list the dependency on the experts' skills and
the fact that heuristics are often generic
(Lanzilotti et al., 2011). Other studies note that
heuristic evaluation can lead to a situation in which
many small and minor usability problems are detected
and fixed, whereas major usability problems remain
unnoticed (Koyani, Bailey and Nall, 2004).
Contrary to a widespread assumption, experts do
not usually achieve better results when performing
specific tasks in the tested system, as they usually do
not know the system before testing; their expert
status is instead based on their experience with
different kinds of software. This, as proven by
studies, allows them to perform faster than novices
(Dillon and Song, 1997) and to spend less time
handling errors, despite making a number of errors
comparable to novice users (Jochen et al., 1991).
The best known guidelines concerning user
interfaces are:
Nielsen's heuristics (Nielsen and Molich, 1990);
Gerhardt-Powals’ cognitive engineering
principles (Gerhardt-Powals, 1996);
Weinschenk and Barker classification
(Weinschenk and Barker, 2000);
Connell’s Full Principles Set (Connell, 2000).
3.2 Applied Heuristics
The authors decided to use the heuristics that they
created and applied in previous research (Borys,
Laskowski and Milosz, 2013). The proposed
heuristics cover the following areas:
Application interface.
Navigation and data structure.
Feedback, system messages, user help.
Content.
Data input.
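As a purely hypothetical illustration, an expert's answers over these five areas could be aggregated into per-area compliance scores. The scoring scheme (0 = non-compliant, 1 = compliant) and all the example answers below are invented for this sketch and are not the authors' method or data.

```python
# Hypothetical aggregation of one expert's answers over the
# five heuristic areas used in the paper; the 0/1 compliance
# answers are invented example data, not the authors' results.
AREAS = [
    "Application interface",
    "Navigation and data structure",
    "Feedback, system messages, user help",
    "Content",
    "Data input",
]

def area_scores(answers):
    """Average per-question answers (0 = fail, 1 = pass) for each area."""
    return {area: sum(vals) / len(vals) for area, vals in answers.items()}

example_answers = {
    "Application interface": [1, 1, 0],
    "Navigation and data structure": [1, 0],
    "Feedback, system messages, user help": [1, 1, 1],
    "Content": [0, 1],
    "Data input": [1],
}

scores = area_scores(example_answers)
print(scores["Feedback, system messages, user help"])  # → 1.0
```

Scores like these would let evaluators compare how the application fares across the five areas at a glance.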