the application interface, pointing out the errors and classifying them according to their degree of severity.
Although Heuristic Evaluation is frequently used for usability evaluation, heuristics are also used to evaluate user interfaces in many different domains. For this reason, many researchers adapt the heuristics to their application domain. However, several authors use an informal process to develop or adapt usability heuristics and do not follow an established and systematic protocol for heuristic assessment.
Our approach presents a different strategy to apply HE. It provides a checklist that differs from existing solutions: each heuristic is accompanied by questions that guide the evaluator in effectively evaluating the DSL.
Regarding the evaluation of the created HEC, we performed seven interviews with researchers and professionals in HCI. These interviews were analyzed using the Inductive Thematic Analysis method, which resulted in a group of common themes of opinions, suggestions, and ideas discussed in this study. These results led us to make improvements to the HEC.
Some of the most important concerns raised by the HCI experts were related to the depth and breadth of the questions and to how the evaluators would use the checklist to perform the heuristic evaluation of a DSL. Suggestions regarding the checklist's template, such as changing the order of the columns and providing the evaluator with a blank box to freely report problems not yet classified, were discussed and accepted.
Moreover, we engaged in a deep discussion on whether Nielsen's heuristics cover all the aspects of Textual and Graphical DSLs. We decided to create more questions for specific heuristics, such as Flexibility and Efficiency of Use (H7). However, it must be stated that the created checklists must be seen by the evaluators as guidance for performing the inspection; if they find usability problems not covered by the content of the checklist, they should describe them in the provided blank box.
Based on the opinions obtained through the interviews, we made the changes to the HEC suggested by the interviewees. After completing these changes, we applied the proposed checklist in an example of use, in order to understand its behavior when used in a context close to a real one. This usage example included five participants, who used the LaTeX textual DSL. Preliminary results showed that our HEC helps both experienced and non-experienced usability evaluators, although some improvements are needed for non-experienced evaluators.
As future work, we aim to apply some of Norman's design principles (Norman, 2013), namely affordance, cognitive overload, and visibility, to our proposal. We also intend to analyze the applicability of the created checklists in several real scenarios in order to understand how the evaluation procedure will be performed with those artifacts.
ACKNOWLEDGEMENTS
This study was financed in part by the Coordenação
de Aperfeiçoamento de Pessoal de Nível Superior -
Brasil (CAPES) - Finance Code 001. Avelino F. Zorzo
is supported by CNPq (315192/2018-6).
REFERENCES
Alonso-Ríos, D., Vázquez-García, A., Mosqueira-Rey, E.,
and Moret-Bonillo, V. (2009). Usability: A critical
analysis and a taxonomy. International Journal of
Human-Computer Interaction, 26(1):53–74.
Barišić, A., Amaral, V., and Goulão, M. (2018). Usability driven DSL development with USE-ME. Computer Languages, Systems & Structures, 51:118–157.
Braun, V. and Clarke, V. (2006). Using thematic analysis
in psychology. Qualitative Research in Psychology,
3(2):77–101.
Clark, L., Pantidi, N., Cooney, O., Doyle, P., Garaialde, D.,
Edwards, J., Spillane, B., Gilmartin, E., Murad, C.,
Munteanu, C., Wade, V., and Cowan, B. R. (2019).
What makes a good conversation?: Challenges in de-
signing truly conversational agents. In Conference on
Human Factors in Computing Systems (CHI), pages
475:1–475:12. ACM.
Fowler, M. (2005). Language Workbenches: The Killer-App
for Domain Specific Languages?
Fowler, M. (2010). Domain Specific Languages. Addison-
Wesley Professional, 1st edition.
Hermawati, S. and Lawson, G. (2016). Establishing usability heuristics for heuristics evaluation in a specific domain: Is there a consensus? Applied Ergonomics, 56:34–51.
Luger, E. and Sellen, A. (2016). “like having a really bad
pa”: The gulf between user expectation and experience
of conversational agents. In Conference on Human Fac-
tors in Computing Systems (CHI), pages 5286–5297.
ACM.
Mernik, M., Heering, J., and Sloane, A. M. (2005). When
and how to develop domain-specific languages. ACM
Computing Surveys, 37(4):316–344.
Mosqueira-Rey, E. and Alonso-Ríos, D. (2020). Usability heuristics for domain-specific languages (DSLs). In SAC, pages 1340–1343.
Nielsen, J. (1993). Usability Engineering. Morgan Kauf-
mann Publishers Inc., San Francisco, CA, USA.
Nielsen, J. (1994). 10 Usability Heuristics for User Interface Design. Available at: https://www.nngroup.com/articles/ten-usability-heuristics/.