Although several educational games have been developed based on the principles of usability, user experience and learning motivation, there is a need for further comparison among educational games in order to gain insights into their best features.
In this work, we present the results of an evaluation and comparison of two competition-based ECG. The evaluation was performed by 41 master's students, using a questionnaire to assess usability, user experience and motivation to learn in ECG. To ensure homogeneous evaluation criteria, the same group of students evaluated the two systems at two different points in time.
This paper is organized as follows: Section 2 presents the literature review; Section 3 describes the educational computer games evaluated and compared in the study; Section 4 describes the proposed questionnaire; Section 5 addresses the study methodology; Section 6 presents the results and discussion; and Section 7 concludes the paper.
2 RELATED WORK
In HCI, usability and UX are considered related but distinct terms regarding user satisfaction. It is understood that the system's functional characteristics are vital, but the user's motivation to keep using the product is critical as well (Hassenzahl, 2003; Vermeeren et al., 2010; Lewis et al., 2013). In fact, they complement each other: user satisfaction cannot be accomplished without adequate system functionality, and for users to be willing to use the system, they must be motivated to do so. However, there are few effective methods to assess UX, either separately or in combination with usability.
Many methods and instruments are available to conduct usability evaluation (Lewis, 2013; Lewis et al., 2013). However, UX is still not addressed comprehensively (Vermeeren et al., 2010; Hassenzahl, 2003). To understand how users really feel about a system, it is important to obtain that information directly from them. Unlike some usability methods, logging may not be fully effective for evaluating UX.
To identify the UX evaluation methods used in industry and academia, Vermeeren et al. (2010) describe a study conducted with 35 participants at the CHI'09 conference. A total of 33 UX evaluation methods were initially considered. However, the researchers reported that only 15 methods evidently considered the hedonic nature of UX in addition to the pragmatic emphasis of usability. The paper does not name all of the identified instruments. The identified methods were categorized into seven groups, including lab studies (individual or group), field studies (short-term or longitudinal), surveys, expert evaluation and mixed methods. In the present study, we implemented a mixed method based on data collected through individual surveys in a short-term field study.
Specific instruments to evaluate the pragmatic and hedonic characteristics of software are available in the literature, including SUMI, QUIS, CSUQ, SUS, UMUX and UMUX-Lite (Lewis, 2013; Lewis et al., 2013), which measure computer system usability, and the AttrakDiff 2 questionnaire, which explicitly evaluates UX (Hassenzahl et al., 2003).
In particular, the System Usability Scale (SUS) is one of the most widely used questionnaires for usability testing. SUS is a 10-item questionnaire (with positively and negatively worded items), released about 20 years ago as a reduced version of previously proposed instruments (Brooke, 1996). More recently, the authors of the Usability Metric for User Experience (UMUX) (Finstad, 2010; Finstad, 2013) and UMUX-Lite (Lewis et al., 2013), in conformance with the ISO definition of usability (standard 9241), introduced two even shorter instruments. However, in the HCI research field, there is some controversy regarding the reliability, validity and sensitivity of these two instruments (Lewis, 2013; Pribeanu, 2016). In the present work, in order to include more specific questions, we opted to develop our own questionnaire items. Similarly to SUS and UMUX, we developed an evaluation instrument considering the constructs usability (“…achieve specified goals with effectiveness, efficiency and satisfaction”) and user experience (“…users' emotions, beliefs, preferences, perceptions, physical and psychological responses”), based on the ISO 9241 standard (ISO 9241-11, 1998).
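As an illustration of how such standardized questionnaires are scored, the well-known SUS scoring procedure (Brooke, 1996) can be sketched as follows: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is multiplied by 2.5 to yield a 0–100 score. The function name and example responses below are illustrative only.

```python
def sus_score(responses):
    """Compute the standard SUS score (0-100) from ten Likert responses.

    responses: list of ten integers in 1..5, in questionnaire order.
    Odd-numbered items are positively worded (contribute response - 1);
    even-numbered items are negatively worded (contribute 5 - response).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    # Scale the 0-40 raw sum to the conventional 0-100 range.
    return total * 2.5

# Illustrative example: a fairly positive evaluation
print(sus_score([5, 1, 5, 2, 4, 2, 5, 1, 4, 2]))  # 87.5
```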
Regarding the learning motivation construct, proposed by Satar (2007) as a new usability measure for e-learning design, we considered the four affective learning sub-constructs of the ARCS Model of Motivational Design: 1) attention, arousing and maintaining interest in the game; 2) relevance, being significant for students' needs; 3) confidence, producing positive expectations of successful achievement; and 4) satisfaction, providing reinforcement for effort.
Hassenzahl (2003) proposes an evaluation model that combines UX elements with functional characteristics (subjective nature of