the UUX of mobile applications, especially inspectors
with no experience in HCI.
We also identified threats that could affect the
validity of the results and took steps to mitigate them.
The main ones are: (i) a training effect could arise if
the training for one technique was of lower quality
than the training for the other; we controlled this risk
by preparing equivalent training for both groups, with
the same examples; (ii) the order of application may
have biased the students; however, all participants
used both techniques, and the order of application was
reversed on the second day; (iii) the metrics used
(efficiency and effectiveness) could be considered a
threat; however, these metrics are commonly adopted
in experimental studies that evaluate usability and UX
in applications (Marques et al., 2019; Nascimento et
al., 2016a); and (iv) the main threat was data
collection, since we had to conduct the study remotely
due to social isolation; we tried to minimize this bias
by applying the same study procedure to all classes
to extract the data. As future work,
we intend to execute new experimental studies to:
(a) evaluate the differences between the results of
experienced (industry) and non-experienced HCI
inspectors employing the two techniques; (b)
qualitatively identify the difficulties, positive aspects,
and improvement suggestions reported by the
participants when using each technique; and (c)
propose a new usability and UX evaluation technique,
focused on mobile applications, that addresses the
limitations identified in this study.
REFERENCES
Carmines, E. G. and Zeller, R. A. (1979). Reliability and
validity assessment. Sage Publications.
Chyung, S. Y., Roberts, K., Swanson, I., and Hankinson,
A. (2017). Evidence-based survey design: The use of a
midpoint on the Likert scale. Performance Improvement,
56(10):15–23.
da Silva Franco, R. Y., Santos do Amor Divino Lima, R.,
Paixão, M., Resque dos Santos, C. G., Serique Meiguins,
B., et al. (2019). UXmood: A sentiment analysis and
information visualization tool to support the evaluation
of usability and user experience. Information, (12).
Guerino, G. C., Silva, W. A. F., Coleti, T. A., and Valen-
tim, N. M. C. (2021). Assessing a technology for usabil-
ity and user experience evaluation of conversational sys-
tems: An exploratory study. In Proceedings of the 23rd
International Conference on Enterprise Information Sys-
tems (ICEIS 2021), volume 2, pages 461–471.
Hassenzahl, M. (2008). User experience (UX): Towards an
experiential perspective on product quality. In Proceedings
of the 20th Conference on l'Interaction Homme-Machine,
pages 11–15.
ISO25010 (2011). ISO/IEC 25010: Systems and software
engineering – SQuaRE – Software product quality
requirements and evaluation – System and software
quality models.
ISO9241-210 (2011). ISO 9241-210: Ergonomics of
human-system interaction – Part 210: Human-centred
design for interactive systems.
Laitenberger, O. and Dreyer, H. M. (1998). Evaluating the
usefulness and the ease of use of a web-based inspec-
tion data collection tool. In Proceedings Fifth Interna-
tional Software Metrics Symposium. Metrics (Cat. No.
98TB100262), pages 122–132. IEEE.
Marques, L., Matsubara, P., Nakamura, W., Wiese, I.,
Zaina, L., and Conte, T. (2019). UX-Tips: A UX evaluation
technique to support the identification of software
application problems. In Proceedings of the XXXIII
Brazilian Symposium on Software Engineering, pages 224–233.
Marques, L., Matsubara, P. G., Nakamura, W. T., Ferreira,
B. M., Wiese, I. S., Gadelha, B. F., Zaina, L. M.,
Redmiles, D., and Conte, T. U. (2021). Understanding UX
better: A new technique to go beyond emotion assessment.
Sensors, 21(21):7183.
Nascimento, I., Silva, W., Gadelha, B., and Conte, T.
(2016a). Userbility: A technique for the evaluation of
user experience and usability on mobile applications. In
International Conference on Human-Computer Interac-
tion, pages 372–383. Springer.
Nascimento, I., Silva, W., Lopes, A., Rivero, L., Gadelha,
B., Oliveira, E., and Conte, T. (2016b). An empirical
study to evaluate the feasibility of a UX and usability
inspection technique for mobile applications. In 28th
International Conference on Software Engineering &
Knowledge Engineering, California, USA.
Nielsen, J. (1994). Heuristic evaluation. Usability inspec-
tion methods.
Rivero, L. and Conte, T. (2017). A systematic mapping
study on research contributions on UX evaluation
technologies. In Proceedings of the XVI Brazilian
Symposium on Human Factors in Computing Systems, pages
1–10.
Sharp, H., Preece, J., and Rogers, Y. (2019). Interaction De-
sign: Beyond Human-Computer Interaction. Addison-
Wesley.
Valentim, N. M. C., Rabelo, J., Oran, A. C., Conte,
T., and Marczak, S. (2015). A controlled experiment
with usability inspection techniques applied to use case
specifications: Comparing the MIT 1 and the UCE
techniques. In 2015 ACM/IEEE 18th International
Conference on Model Driven Engineering Languages and
Systems (MODELS), pages 206–215. IEEE.
Venkatesh, V. and Davis, F. D. (2000). A theoretical exten-
sion of the technology acceptance model: Four longitu-
dinal field studies. Management Science, 46(2):186–204.
Wohlin, C., Runeson, P., Höst, M., Ohlsson, M. C., Regnell,
B., and Wesslén, A. (2012). Experimentation in software
engineering. Springer Science & Business Media.
An Experimental Study on Usability and User Experience Evaluation Techniques in Mobile Applications