simulation support and a phone as a simulation element) can diminish the sense of immersion. A proposed solution is to project the simulation onto a large screen, allowing participants to stand in front of it for a more immersive experience without resorting to Virtual Reality technologies. Feedback from the AttrakDiff and UES-SF questionnaires indicates positive participant reception, highlighting emotional and intellectual engagement, particularly with the LLM version. This suggests a promising avenue for learning simulations and serious games, as intellectual stimulation is crucial for Experiential Learning and enhances the educational quality of the simulations.
Integrating LLMs into simulations introduces challenges with controlling variables and event triggers, unlike dialogue trees, where each node directly impacts simulation outcomes. A workaround in this project involved prompting the LLM to suggest actions, yet interpreting complex, variable-rich LLM responses remains a hurdle. A second LLM could theoretically parse the first's output, though this raises issues around its training and increases response times. This approach complicates the balance between maintaining simulation integrity and leveraging LLMs for dynamic, naturalistic dialogue generation. The distinction between dialogue trees and LLMs in dialogue generation highlights a trade-off between control and naturalness. Dialogue trees offer complete control, ensuring consistency, while LLMs provide more natural interaction with less predictability. This raises the question of merging both methods to harness their respective strengths: a hybrid approach in which a dialogue tree guides an LLM could improve consistency, opening avenues for innovative solutions in dialogue generation.
REFERENCES
Car, L. T., Dhinagaran, D. A., Kyaw, B. M., Kowatsch, T.,
Joty, S., Theng, Y.-L., and Atun, R. (2020). Con-
versational agents in health care: Scoping review and
conceptual analysis. Journal of Medical Internet Re-
search, 22:e17158.
Ding, N., Qin, Y., Yang, G., Wei, F., Yang, Z., Su, Y.,
Hu, S., Chen, Y., Chan, C.-M., Chen, W., et al.
(2023). Parameter-efficient fine-tuning of large-scale
pre-trained language models. Nature Machine Intelli-
gence, 5(3):220–235.
Hassenzahl, M., Burmester, M., and Koller, F. (2003). AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität, pages 187–196. B. G. Teubner.
Jain, S. M. (2022). Hugging Face, pages 51–67. Apress.
Kerly, A., Hall, P., and Bull, S. (2007). Bringing chatbots
into education: Towards natural language negotiation
of open learner models. Knowledge-Based Systems,
20:177–185.
Monaco, P.-B., Backlund, P., and Gobron, S. (2024). The
negotiator: Interactive hostage-taking training simula-
tion. In 14th International Conference on Simulation
and Modeling Methodologies, Technologies and Ap-
plications (SIMULTECH 2024). SCITEPRESS.
Monaco, P.-B., Villagrasa, D., and Canton, D. (2023).
The negotiator. In Gamification and Serious GameS
(GSGS’23), pages 94–97. HES-SO.
O’Brien, H. L., Cairns, P., and Hall, M. (2018). A practi-
cal approach to measuring user engagement with the
refined user engagement scale (ues) and new ues short
form. International Journal of Human-Computer
Studies, 112:28–39.
Padilla, J. J., Lynch, C. J., Kavak, H., Evett, S., Nelson,
D., Carson, C., and del Villar, J. (2017). Storytelling
and simulation creation. In 2017 Winter Simulation
Conference (WSC), pages 4288–4299. IEEE.
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., and Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology.
Shao, Y., Li, L., Dai, J., and Qiu, X. (2023). Character-
LLM: A trainable agent for role-playing. In Bouamor,
H., Pino, J., and Bali, K., editors, Proceedings of
the 2023 Conference on Empirical Methods in Nat-
ural Language Processing, pages 13153–13187, Sin-
gapore. Association for Computational Linguistics.
Zou, A., Wang, Z., Carlini, N., Nasr, M., Kolter, J. Z., and
Fredrikson, M. (2023). Universal and transferable ad-
versarial attacks on aligned language models. ArXiv,
abs/2307.15043.
APPENDIX
• Surveys: Raw Data and Test Protocol
https://drive.google.com/drive/folders/1n8QGcq6Jvid82Q1erJp YXv8eLv7kVSp?usp=sharing
Interactive Storytelling Apps: Increasing Immersion and Realism with Artificial Intelligence?