
ChatGPT does not have an anthropomorphic avatar, in order to maintain a neutral and universal appearance, and its text-only interface allows users to imagine its persona based on its responses (Liu and Siau, 2023; Nowak and Rauh, 2005).
The results underscore the critical need to raise user awareness during interactions with generative AI, owing to several significant limitations of these systems. Among these, hallucination stands out as a prominent issue: erroneous information is presented as factual, which can seriously harm the well-being of individuals, especially vulnerable ones. There are also risks associated with the halo effect, whereby individuals are inclined to trust conversational AI’s polished and authoritative language. The bias problem within generative AI exacerbates these concerns, as these systems tend to replicate, and potentially amplify, biases present in their training data, resulting in unfair or discriminatory outputs (Milmo, 2024).
Ethical dilemmas also arise from the potential misuse of generated content for malicious purposes, including the creation of deepfakes, the dissemination of fake news, and the spread of misinformation (Logan, 2024). Moreover, poor-quality training data can lead to misleading answers; ChatGPT, for instance, was trained on data from web crawling, Reddit posts with three or more upvotes, Wikipedia, and internet book collections (Walsh, 2024). It is therefore imperative to regulate these tools appropriately to mitigate the adverse consequences of misplaced trust.
5.2 Limitations
This study has several limitations:
• The use of the Wizard of Oz technique, while effective for simulating generative AI functionality, may not fully replicate real-world interactions with generative AI tools.
• The participants were solely Master’s students, potentially limiting the generalizability of the findings to groups with other educational backgrounds. Future research could address this limitation by including more diverse participant groups.
• The variation in UI design was limited to the avatar and text font. Exploring additional design variables could further enhance understanding of the relationship between UI design and user trust in AI systems.
6 CONCLUSION AND FUTURE PLAN
This study’s findings highlight that the avatar is the most influential UI element affecting user trust in generative AI. The results also showed that participants were sensitive to text font variations. Interestingly, although participants interacted with the same source of outputs, variations in the UI led to differing perceptions of trust, emphasizing the role of UI design in shaping trust in generative AI responses. These results underscore the need for designers and developers to exercise caution when designing UIs, guiding users away from placing excessive trust in unregulated generative AI systems. Users should not be misled by UI design choices into increasing their trust in such systems.
For future research, we plan to conduct a larger experiment with participants from diverse user groups, varying in educational background and familiarity with generative AI. We also aim to explore the nuanced interactions between UI design and user trust in generative AI tools by considering additional UI elements, such as color, which was not studied in this experiment because all three UIs shared the color blue.
REFERENCES
Alagarsamy, S. and Mehrolia, S. (2023). Exploring chatbot trust: Antecedents and behavioural outcomes. Heliyon, 9(5).
Atillah, I. E. (31 March 2023). Man ends his life after an AI chatbot ’encouraged’ him to sacrifice himself to stop climate change. https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-.
Bach, T. A., Khan, A., Hallock, H., Beltrão, G., and Sousa, S. (2024). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 40(5):1251–1266.
Bae, S., Lee, Y. K., and Hahn, S. (2023). Friendly-bot: The impact of chatbot appearance and relationship style on user trust. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 45.
Baek, T. H. and Kim, M. (2023). Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telematics and Informatics, 83:102030.
Feuerriegel, S., Hartmann, J., Janiesch, C., and Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1):111–126.
Fowler, G. A. (10 August 2023). AI is acting ‘pro-anorexia’ and tech companies aren’t