which describe distinct and relatively well-defined
aspects of user experience that can be measured
independently.
Questionnaires that measure the user experience
take into account this complexity, since they usually
compute values on different UX scales. A scale
corresponds to a content-delimited quality
characteristic of user experience, e.g. efficiency or
originality. Depending on the questionnaire, different
combinations of quality characteristics are measured.
Standardized questionnaires are not a more or less
random or subjective collection of questions; they
result from a careful construction process. This
process ensures that the intended UX qualities are
measured accurately. On the other hand, a
standardized UX questionnaire cannot measure user
experience holistically (Osgood et al., 1978). It
accurately measures only the UX scales identified
during its construction, such as stimulation,
efficiency, or attractiveness.
The method presented in this paper is based on the
User Experience Questionnaire (UEQ) (Laugwitz et
al., 2008) and shows how to interpret the results from
the UEQ by conducting an importance-performance
analysis. We decided to use the UEQ because it is a
well-known UX questionnaire and is available in
more than 20 languages. The objective of the UEQ is
to allow end users to perform a quick assessment that
covers a preferably comprehensive impression of
user experience. It allows users to express the
feelings, impressions, and attitudes that arise when
experiencing the product under investigation in a very
simple and immediate way. It consists of 26 items that
are grouped into six scales (Attractiveness,
Perspicuity, Efficiency, Dependability, Stimulation,
and Novelty). Each scale represents a distinct UX
quality aspect.
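The aggregation of item responses into scale values can be sketched as follows. Note that the item-to-scale mapping below is purely illustrative; the actual UEQ defines a fixed assignment of its 26 items to the six scales, and item responses are first transformed to a range of -3 to +3.

```python
# Illustrative sketch of aggregating questionnaire items into scale means.
# The mapping below is hypothetical; the real UEQ fixes which of its
# 26 items belong to each of the six scales.
from statistics import mean

# Hypothetical mapping: scale name -> indices of its items
SCALES = {
    "Attractiveness": [0, 1, 2],
    "Efficiency": [3, 4],
    "Novelty": [5, 6],
}

def scale_means(responses):
    """responses: item values already transformed to the range -3..+3."""
    return {name: mean(responses[i] for i in idx)
            for name, idx in SCALES.items()}

answers = [2, 1, 3, 0, -1, 2, 2]
print(scale_means(answers))
# -> {'Attractiveness': 2, 'Efficiency': -0.5, 'Novelty': 2}
```

Each scale value is thus a simple mean over its items, which keeps the scales interpretable independently of one another.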
The UEQ offers various options for interpreting
the data. For example, the scales as well as the
associated items can be interpreted individually. For
each scale, there is also a benchmark that allows
comparison with other data (Schrepp et al., 2017).
Another approach is the importance-performance
analysis (IPA) (Martilla and James, 1977). An IPA
measures customer satisfaction and presents it
graphically so that recommendations for action can be
derived. Customer satisfaction is determined by
querying the perceived importance and performance
of a set of attributes. The results are displayed in a
plot, and the recommendations for action are derived
from the position of the attributes within it.
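The derivation of recommendations from the plot can be sketched as follows. The quadrant labels are the standard ones from Martilla and James (1977); placing the crosshair at the grand means of importance and performance is one common choice among several, and the attribute data here is invented for illustration.

```python
# Sketch of quadrant assignment in an importance-performance analysis.
# Quadrant labels follow Martilla and James (1977); the crosshair is
# placed at the grand means (one common choice among several).
def ipa_quadrant(importance, performance, i_mid, p_mid):
    if importance >= i_mid and performance < p_mid:
        return "Concentrate Here"
    if importance >= i_mid and performance >= p_mid:
        return "Keep Up the Good Work"
    if importance < i_mid and performance < p_mid:
        return "Low Priority"
    return "Possible Overkill"

# Invented example: (attribute, importance, performance) on a 7-point scale
attrs = [("speed", 6.2, 3.1), ("design", 3.0, 6.5), ("help", 2.5, 2.0)]
i_mid = sum(a[1] for a in attrs) / len(attrs)
p_mid = sum(a[2] for a in attrs) / len(attrs)
for name, imp, perf in attrs:
    print(name, ipa_quadrant(imp, perf, i_mid, p_mid))
```

In this example, an important but poorly performing attribute ("speed") lands in the "Concentrate Here" quadrant, signalling where improvement effort should be focused first.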
In this article, we present a method to interpret the
results from the UEQ by conducting an importance-
performance analysis (IPA).
Section 2 surveys the background and related
work regarding the IPA. Section 3 outlines our
method for interpreting the results of the UEQ by
conducting an IPA and describes a first study to
validate the method. Section 4 presents the results of
this study, and Section 5 discusses them.
2 BACKGROUND AND RELATED
WORK
As already described in the introduction, the
importance-performance analysis (IPA) is one way of
graphically representing the relationship between
importance and performance for a set of attributes in
a plot (Martilla and James, 1977).
There is no prescribed list of attributes for
performing an IPA; the list must be determined for
each concrete study (Martilla and James, 1977). In the
literature, there are already proposals for selected
products, for instance, websites of airline companies
(Öz, 2012) or Internet stores (Pokryshevskaya and
Antipov, 2013). Another approach is to extract the
items or scales from an existing questionnaire.
Tontini (2016) took the items from the e-SERVQUAL
questionnaire and used them as the set of attributes to
evaluate online shopping sites. Thus, there are various
ways of creating a list of attributes.
Importance and performance are usually measured
by rating each attribute directly on a seven-point
rating scale, with one item for importance and one
item for performance (Abalo et al., 2007; Azzopardi
and Nash, 2013). There are other
methods that derive importance indirectly from the
performance results (Bacon, 2003), for example,
through multivariate regression analysis (Danaher
and Mattsson, 1994) or a conjoint analysis (Danaher,
1997). This has the advantage that only one item per
attribute (the performance rating) needs to be
queried. The disadvantage, however, is reduced data
quality (Bacon, 2003). In practice, direct
measurement with two items per attribute has largely
established itself (Bacon, 2003).
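The indirect, regression-based approach can be sketched as follows: importance is estimated as the regression weights of an overall satisfaction rating on the attribute performance ratings. The data below is synthetic, and this is only a minimal illustration of the idea, not the specific procedure of the cited studies.

```python
import numpy as np

# Sketch of indirectly derived importance (cf. Bacon, 2003): attribute
# importance is estimated as the regression weights of an overall
# rating on the attribute performance ratings. Synthetic data.
rng = np.random.default_rng(0)
n = 200
perf = rng.uniform(1, 7, size=(n, 3))             # performance of 3 attributes
true_w = np.array([0.6, 0.3, 0.1])                # hidden "importance" weights
overall = perf @ true_w + rng.normal(0, 0.2, n)   # overall satisfaction

X = np.column_stack([np.ones(n), perf])           # add intercept column
coef, *_ = np.linalg.lstsq(X, overall, rcond=None)
derived_importance = coef[1:]                     # close to the hidden weights
print(derived_importance)
```

Only the performance ratings and one overall rating are queried here, which illustrates the reduced response burden of the indirect approach; the price is that the derived weights depend on modeling assumptions.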
The values of the items for each attribute are
displayed in the IPA plot (Figure 1), where each
attribute is assigned a point whose coordinates are
the performance value (x-axis) and the importance
value (y-axis). The aim is to derive
recommendations for action for each quadrant. The
recommendation for action is derived from the
relationship between importance and performance