Table 2: Mean values and standard deviations for the UEQ and UEQ-S. Values are mean (standard deviation).

Product  Country   Pragmatic Quality             Hedonic Quality               Overall
                   PQ(UEQ)      PQ(UEQ-S)        HQ(UEQ)      HQ(UEQ-S)        OV(UEQ)      OV(UEQ-S)
Amazon   England   1.50 (1.05)  1.54 (1.04)      0.95 (1.08)  1.008 (1.03)     1.32 (1.11)  1.272 (0.93)
         Spain     1.03 (1.05)  0.99 (1.02)      1.03 (1.02)  0.990 (0.99)     1.08 (1.04)  0.98 (0.96)
         Germany   1.36 (0.91)  1.550 (0.95)     0.73 (0.98)  0.701 (0.97)     1.17 (1.00)  1.126 (0.85)
Skype    England   1.06 (1.13)  1.167 (1.12)     0.50 (1.08)  0.403 (1.04)     0.90 (1.15)  0.787 (0.94)
         Spain     0.93 (0.07)  1.118 (0.93)     0.77 (1.05)  0.723 (1.02)     0.94 (1.01)  0.911 (0.84)
         Germany   0.77 (1.12)  0.997 (1.24)     0.44 (1.05)  0.412 (1.04)     0.68 (1.12)  0.698 (1.03)
5 CONCLUSIONS
The short version UEQ-S of the UEQ is intended for cases in which filling out the complete 26-item UEQ is not possible. The UEQ-S was designed with a data-analytical approach on the basis of the full UEQ. First evaluation studies showed that the items of the UEQ-S approximate the UEQ results for pragmatic and hedonic quality well, in the sense that the 4 items of the UEQ-S per scale are quite good predictors for the mean values of the 12 and 8 items, respectively, of the full UEQ assigned to these meta-dimensions.
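To illustrate this relation, the following minimal Python sketch pairs the meta-dimension means of the full UEQ with the corresponding UEQ-S scale means for the same respondents. The item-to-scale assignment, the treatment of the overall value as the mean of all pragmatic and hedonic items, and the use of NumPy are assumptions for illustration only; the official item numbering of the questionnaires is not reproduced here.

```python
import numpy as np

# Hypothetical layout: each row is one respondent, each column one item,
# with scores already transformed to the usual -3..+3 range. The item
# indices below are placeholders, not the official questionnaire numbering.
UEQ_PRAGMATIC = list(range(0, 12))   # 12 UEQ items: perspicuity, efficiency, dependability
UEQ_HEDONIC = list(range(12, 20))    # 8 UEQ items: stimulation, novelty
UEQS_PRAGMATIC = [0, 1, 2, 3]        # 4 UEQ-S items for pragmatic quality
UEQS_HEDONIC = [4, 5, 6, 7]          # 4 UEQ-S items for hedonic quality


def scale_mean(data: np.ndarray, items: list[int]) -> float:
    """Mean over respondents of each respondent's mean on the given items."""
    return float(data[:, items].mean(axis=1).mean())


def compare(ueq: np.ndarray, ueq_s: np.ndarray) -> dict[str, tuple[float, float]]:
    """Pair the full-UEQ meta-dimension means with the UEQ-S scale means."""
    return {
        "PQ": (scale_mean(ueq, UEQ_PRAGMATIC), scale_mean(ueq_s, UEQS_PRAGMATIC)),
        "HQ": (scale_mean(ueq, UEQ_HEDONIC), scale_mean(ueq_s, UEQS_HEDONIC)),
        # Overall taken here as the mean of all pragmatic and hedonic items
        # (an assumption of this sketch).
        "OV": (scale_mean(ueq, UEQ_PRAGMATIC + UEQ_HEDONIC),
               scale_mean(ueq_s, UEQS_PRAGMATIC + UEQS_HEDONIC)),
    }
```

If the approximation holds, the paired PQ, HQ, and overall values returned by `compare` should lie close together, as the corresponding UEQ and UEQ-S columns in Table 2 do.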
However, more data are required to gain a deeper understanding of the relation between the full and the short version. In future work, the Spanish data set must be checked to see whether the results are generally valid for the Spanish version of the UEQ, since only students took part in that study. This paper presents the results of three additional validation studies for different language versions, which confirm the scale structure of the UEQ-S and again show a reasonable congruity between the short and the full version of the questionnaire.
One of the key features of the UEQ is a large benchmark data set. The benchmark helps to interpret results obtained with the UEQ by comparing them to the results of other products in the benchmark. It is not directly possible to recalculate a benchmark for the UEQ-S, since only the scale means are available for many of the data points in the UEQ benchmark, i.e. the raw data are not available due to data privacy issues. This paper showed that, due to the good approximation of the meta-dimensions pragmatic and hedonic quality by the 8 items of the short version, it is possible to use a natural transformation of the UEQ benchmark for the UEQ-S.
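A minimal sketch of how such a transformed benchmark could be applied is given below, assuming per-scale category thresholds are available. The threshold values and category labels used here are placeholders for illustration; they are not the published UEQ or UEQ-S benchmark intervals.

```python
# Placeholder thresholds only; the published benchmark intervals are not
# reproduced here. Each entry maps a lower bound of the scale mean to a
# benchmark category, checked from best to worst.
BENCHMARK = {
    "PQ": [(1.6, "excellent"), (1.2, "good"), (0.8, "above average"),
           (0.5, "below average")],
    "HQ": [(1.5, "excellent"), (1.1, "good"), (0.7, "above average"),
           (0.4, "below average")],
}


def benchmark_category(scale: str, mean: float) -> str:
    """Return the category of the first lower bound the scale mean reaches."""
    for lower_bound, category in BENCHMARK[scale]:
        if mean >= lower_bound:
            return category
    return "bad"


# Example with a value from Table 2: Amazon (England), PQ(UEQ-S) = 1.54.
print(benchmark_category("PQ", 1.54))  # -> "good" under these placeholder bounds
```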