situation was risky” and the item “The system made an
unsafe prediction in this situation” received the lowest
ratings, while the items “I would have performed better
than the system in this situation” and “The system
reacted appropriately in this situation” were rated
highest.
5 DISCUSSION
Dolinek and Wintersberger’s scale was developed
specifically for contexts involving artificial intelligence
(AI), while Körber’s scale incorporates aspects of both
AI and general automation. Since most present-day
applications and systems lie at the border between
these concepts, it is worth evaluating whether
both questionnaires are suitable for trust analysis. In
our study, both scales yielded similar results.
Körber’s scale produced somewhat more positive
results than Dolinek and Wintersberger’s and
contains items that are more general in nature. These
items refer not only to the amount of trust users place
in the particular system under study but also to the
extent to which they trust automated systems in
general, which is an interesting aspect of this tool.
However, Dolinek and Wintersberger’s scale is
shorter and more specific to AI applications, which
are likely to constitute the majority of present and
future applications.
6 CONCLUSIONS
Both scales appear to be suitable for measuring trust.
The choice between them can therefore be based on
the time available for responding to the questionnaire
and on the specific items that the application under
study requires.
REFERENCES
Ajenaghughrure, I. B., Da Costa Sousa, S. C., & Lamas, D.
(2020). Risk and Trust in artificial intelligence
technologies: A case study of Autonomous Vehicles. In
2020 13th International Conference on Human System
Interaction (HSI) (pp. 118–123). IEEE.
https://doi.org/10.1109/HSI49210.2020.9142686
Ajenaghughrure, I. B., Da Costa Sousa, S. C., & Lamas, D.
(2020). Measuring Trust with Psychophysiological
Signals: A Systematic Mapping Study of Approaches
Used. Multimodal Technologies and Interaction, 4(3),
63. https://doi.org/10.3390/mti4030063
Ajenaghughrure, I. B., Sousa, S. C., Kosunen, I. J., &
Lamas, D. (2019). Predictive model to assess user trust.
In N. Rangaswamy, R. Loganathan, & A. Tiwari (Eds.),
Proceedings of the 10th Indian Conference on Human-
Computer Interaction (pp. 1–10). ACM.
https://doi.org/10.1145/3364183.3364195
Alicke, M. D., Dunning, D. A., & Krueger, J. (Eds.). (2005).
The Self in Social Judgment. Psychology Press.
Alsaid, A., Li, M., Chiou, E. K., & Lee, J. D. (2022).
Measuring trust: A text analysis approach to compare,
contrast, and select trust questionnaires. PsyArXiv.
https://psyarxiv.com/5eyur/
Castelfranchi, C., & Falcone, R. (Eds.). (2010). Trust
Theory. John Wiley & Sons, Ltd.
https://doi.org/10.1002/9780470519851
Dolinek, L., & Wintersberger, P. (2022). Towards a
Generalized Scale to Measure Situational Trust in AI
Systems. In ACM CHI Conference on Human Factors
in Computing Systems. ACM.
Geitner, C., Sawyer, B. D., Birrell, S., Jennings, P.,
Skyrypchuk, L., Mehler, B., & Reimer, B. (2017). A
Link Between Trust in Technology and Glance
Allocation in On-Road Driving. In Proceedings of the
9th International Driving Symposium on Human
Factors in Driver Assessment, Training, and Vehicle
Design: driving assessment 2017 (pp. 263–269).
University of Iowa.
https://doi.org/10.17077/drivingassessment.1645
Graichen, M. (2019). Analyse des Fahrverhaltens bei der
Annäherung an Knotenpunkte und personenspezifische
Vorhersage von Abbiegemanövern [Analysis of driving
behavior when approaching intersections and person-
specific prediction of turning maneuvers] [Doctoral
thesis]. Universität der Bundeswehr München, Neubiberg.
http://athene-forschung.rz.unibw-muenchen.de/129783
Gulati, S., Sousa, S., & Lamas, D. (2017). Modelling Trust:
An Empirical Assessment. In R. Bernhaupt, G. Dalvi,
A. Joshi, D. K. Balkrishan, J. O’Neill, & M. Winckler
(Eds.), Lecture Notes in Computer Science. Human-
Computer Interaction – INTERACT 2017 (Vol. 10516,
pp. 40–61). Springer International Publishing.
https://doi.org/10.1007/978-3-319-68059-0_3
Gulati, S., Sousa, S., & Lamas, D. (2019). Design,
development and evaluation of a human-computer trust
scale. Behaviour & Information Technology, 38(10),
1004–1015.
https://doi.org/10.1080/0144929X.2019.1656779
Itoh, M. (2012). Toward overtrust-free advanced driver
assistance systems. Cognition, Technology & Work,
14(1), 51–60. https://doi.org/10.1007/s10111-011-0195-2
Kinzer, K. (2009). Tweenbots. http://tweenbots.com/
Körber, M. (2019). Theoretical considerations and
development of a questionnaire to measure trust in
automation. In S. Bagnara, R. Tartaglia, S. Albolino, T.
Alexander, & Y. Fujita (Eds.), Proceedings of the 20th
Congress of the International Ergonomics Association
(IEA 2018): Volume VI: Transport Ergonomics and
Human Factors (TEHF), Aerospace Human Factors
and Ergonomics (pp. 13–30). Springer.
Lankton, N. K., McKnight, D. H., & Tripp, J. (2015).
Technology, humanness and trust: Rethinking trust in