Exploring Healthcare Virtual Simulation Modality Preferences: An Interprofessional Learning Experience

Lorene Cobb (https://orcid.org/0009-0005-3581-157X), Leslie Rippon (https://orcid.org/0009-0000-1387-4215), Marcia Downer, Lauren Snowdon, Alicia McGregor, Natalie Neubauer, Lisa Sheikovits, Angela Lis and Genevieve Pinto Zipp (https://orcid.org/0000-0003-0683-0673)

School of Health and Medical Sciences, Seton Hall University, 400 South Orange Avenue, South Orange, U.S.A.
Keywords: Virtual Reality, AI Avatar, Virtual Simulated Patients, Healthcare Simulation, Interprofessional Education.
Abstract: Background: Various simulation platforms employ a trial-and-error teaching method via artificial experiences to engage healthcare professional students in activities reflecting real-life experiences without risk. Objective: To understand faculty preferences regarding two different types of virtual simulation platforms. Methods: Upon completing two 30-minute avatar-based simulation experiences employing the same patient case under two different simulation platforms (A = AI-generated avatar, B = live actor avatar), participants completed online surveys comprising the revised Interprofessional Collaborative Competency Attainment Scale (ICCAS), demographic questions, sliding-scale items assessing perception of verbal and non-verbal communication, and open-ended items assessing perception of communication, avatar type preference, and whether the experience promoted confidence, ability, and knowledge. Sample: This pilot study consisted of 9 faculty members from various allied health professions. Results: Participants preferred the live actor avatar over the AI avatar for verbal communication and authenticity of emotions. Following the live actor avatar simulation, participants reported improved perception of the confidence, ability, and knowledge necessary for interprofessional teamwork. Conclusion: Although the small sample size may limit the generalizability of the results, participants perceived value in both types of avatar patient experiences, with perceptions more favourable for simulating patient-centred interactions with the live actor avatar.
1 INTRODUCTION
As health professions educators, it is imperative that we prepare students to effectively meet the needs of today’s ever-changing healthcare landscape. Academic institutions must prepare health professional students to address these needs upon entering the workforce. To meet this challenge, health professional educators continually infuse innovative and effective teaching methods and evidence-based instructional materials into the academic environment to ensure that students are ready for person-centered, entry-level practice. Not surprisingly, academic institutions are turning to simulation-based platforms as a teaching modality, employing a trial-and-error method of teaching and learning via artificial experiences that engage learners
in activities that reflect real-life conditions without the consequences of risk taking (Zhai et al., 2021).
Traditionally, health profession programs have infused individualized simulation experiences to promote and assess students' understanding of complex clinical scenarios via a hands-on, active learning approach. Simulated scenarios that incorporate students from across professional disciplines have emerged over the past several years to provide a unique opportunity to merge theory with practice by highlighting the interdependency that exists amongst and between health professionals when providing person-centered, team-based care (Clapper, 2010). The nature of such diverse, in-person interprofessional simulation experiences has been found to spontaneously enhance student communication skills and promote the development of respect for different value systems through effective
personal interactions (Rider et al., 2014). Numerous benefits of in-person simulation experiences have been noted in the literature, including the development of leadership and teamwork (Endacott et al., 2014), improved decision making and critical thinking (Rhodes and Curran, 2005), gains in clinical skills and clinical performance (Alinier et al., 2006), enhanced management of patient deterioration (Cooper et al., 2012), and improved situation awareness (Bogossian et al., 2014). However, notable barriers, including lack of time, resources, financial cost, and workload issues (Al-Ghareeb and Cooper, 2016), have also been reported.
As an alternative to traditional, in-person simulations, active learning using virtual reality or gaming platforms has been employed as an environment for meaningful educational scenarios. Virtual reality (VR) has demonstrated usability as a teaching modality to enhance interprofessional collaboration and communication (Qiau et al., 2021). In general, diverse, virtual interprofessional simulation learning experiences have been found to spontaneously enhance student communication skills and promote the development of respect for different value systems through effective personal interactions (Rider et al., 2014).
The Perception-Action Theory, which provides a framework for understanding how individual preferences influence learning processes within the cognitive system, was used as a lens to guide the study. This theory posits that perception and action are interdependent, forming a continuous feedback loop that shapes how we interact with and learn from our environment (Biwer et al., 2020). The continuous feedback loop between perception and action allows learners to adapt their strategies based on the outcomes of their actions. This adaptability is crucial for personalized learning, where students adjust their approaches based on what works best for them (Fujii, 2024). Students who are given the autonomy to choose their learning methods and environments have been noted to perform better academically; thus, accommodating students' learning preferences can lead to more effective, meaningful learning experiences (Biwer et al., 2020; Fujii, 2024).
To date, limited evidence is available differentiating the impact of the various simulation-based platforms on learning. This article presents pilot data exploring faculty preferences regarding two different types of simulation platforms used to promote student learning, one using AI-generated avatar patients and the other using live actor avatar patients. Recognizing the interdependency between perception and action, in this study we sought to explore different simulation-based learning experiences and individual perceptions of them. We believe findings from this pilot project will further inform the development of meaningful simulation-based learning experiences. Additionally, given the limited evidence supporting avatar simulation-based learning experiences' ability to mimic real-life scenarios, the accuracy of the case portrayal and the authenticity of verbal and nonverbal communication will be explored in both the AI-generated avatar patient and the live actor avatar patient scenarios.
2 INTERVENTIONS
In this pilot study, two distinct VR platforms, with different ways of animating avatars, were utilized to explore preference. In both cases the same patient information was used in the development of the virtual patients. The patient, CJ Williams, is a middle-aged man who has been seeing a team of interprofessional healthcare providers for balance, coordination, and memory issues and has recently received an unexpected diagnosis of multiple sclerosis. The learning objectives for the scenario are for the healthcare professionals to work as a team to counsel CJ regarding the unexpected diagnosis and to obtain the information needed from CJ to create a comprehensive treatment plan that meets his needs.
2.1 Live Actor Avatar
The first VR platform, ENGAGE™ (Wilmington, DE, USA), relies on human actors who wear VR headsets to bring avatars to life (Figure 1). The actor performed in a virtual environment, and their basic head, eye, and arm movements were captured in real time to animate the avatar using the Meta Quest Pro VR headset. The same actor portrayed the avatar in all sessions. The participants met in a conference room on campus and interacted with the avatar on the ENGAGE™ platform using one computer to conduct the telehealth conference call.
Figure 1: Live Actor Avatar interface on EngageXR™.
2.2 AI Avatar
The second VR platform, VictoryXR™ (Davenport, IA, USA), was used to develop the AI avatar (CJ) interface; it utilizes artificial intelligence (AI) to animate avatars using the ChatGPT 4.0 text-to-speech model. Internet download speeds over 50 Mbps and upload speeds over 15 Mbps are required to operate the AI avatar. CJ (Figure 2) was programmed to be pleasant and agreeable, to have knowledge of the signs and symptoms of multiple sclerosis, and was given the same case-specific information as the live actor avatar. Avatar graphics include simple head, face, eye, and hand movements that accompany the speech. While the AI is processing a response, CJ adopts a thinking posture with his head down and hand on his chin; the avatar takes anywhere from 6 to 10 seconds to respond. The participants met with the avatar on campus in the same conference room used for the live actor avatar. Participants gathered around one computer and joined the telehealth session via a direct web link.
Figure 2: AI Avatar interface on VictoryXR™.
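To make the mechanics of the AI avatar concrete, the sketch below outlines how a persona-driven patient turn of this kind might be implemented. It is a minimal illustration only: it assumes an OpenAI-style chat completion client and a hypothetical hand-off to a text-to-speech engine; the actual VictoryXR pipeline, prompt wording, and model configuration are proprietary and are not described in this paper.

```python
# Illustrative sketch of a persona-driven patient avatar turn (not VictoryXR's code).
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable;
# the model name and prompt wording are placeholders.
import time
from openai import OpenAI

client = OpenAI()

# Persona reflecting the case description: pleasant, agreeable, newly diagnosed with MS.
SYSTEM_PROMPT = (
    "You are CJ Williams, a middle-aged man who has recently received an unexpected "
    "diagnosis of multiple sclerosis. You have balance, coordination, and memory issues. "
    "You are pleasant and agreeable, know the common signs and symptoms of MS, and are "
    "meeting an interprofessional healthcare team over telehealth to discuss a treatment plan."
)

def avatar_reply(history, clinician_utterance):
    """Return CJ's next reply and the response latency in seconds."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history
    messages.append({"role": "user", "content": clinician_utterance})
    start = time.time()
    completion = client.chat.completions.create(model="gpt-4", messages=messages)
    latency = time.time() - start
    reply = completion.choices[0].message.content
    # In a full pipeline the reply text would be passed to a text-to-speech engine and
    # the avatar would hold a "thinking" pose until audio playback begins.
    return reply, latency

if __name__ == "__main__":
    text, secs = avatar_reply([], "Hi CJ, how have you been feeling since your last visit?")
    print(f"({secs:.1f}s) CJ: {text}")
```

Because each conversational turn requires a full model call followed by speech synthesis, per-turn latencies of several seconds, comparable to the 6 to 10 seconds observed in this study, are to be expected with this kind of design.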
3 METHODS
This study employed an embedded (QUAL/Quan) counterbalanced mixed methods approach (Creswell, 2021), and University IRB approval was obtained. Simulation best practices, as reported by Violato et al. (2023), relating to outcomes, objectives, simulation design, and facilitation of a participant pre-briefing and debriefing were included in this study.
Potential subjects were identified through association with the university's professional healthcare and allied healthcare programs. Potential subjects were sent the approved letter of solicitation via email. Individuals who indicated interest in participating in the study were sent an approved informed consent. Once the informed consent was received, participants were sent the date of the learning experience, which comprised a 60-minute on-campus commitment, a 20-minute online survey, and 15 minutes of case-specific information designed to prepare them for the simulated interaction. Subjects were required to complete the online survey and review the case-specific information prior to the on-campus session.
All potential participants who agreed to complete the 15-minute prep work, participate in the 60-minute in-person learning experience, and complete the 20-minute online post-survey were invited to self-select into one of two interprofessional group sessions based upon their availability. Group 1 (day 1 participants) and Group 2 (day 2 participants) were then randomly assigned to complete either Case A (AI avatar) or Case B (live actor avatar) first. The simulated patient experience order was counterbalanced, and each group completed both experiences on the same day in alternate order; a rest period was provided between sessions to minimize the potential for cognitive fatigue.
On the scheduled testing day, subjects reported to a specific room on campus at a specified time. Upon arrival, participants were assigned a subject number by a member of the research team. The participants were given 10 minutes to re-orient themselves to the case information and discuss the case with their IPE team members. The participants then had 30 minutes to complete the simulated patient interaction. After the first session, subjects completed a post-simulation survey (15-20 minutes) and took a scheduled 15-minute rest break. Participants then engaged in the second 30-minute patient simulation using the same case but the alternate simulation modality and completed the same 15-20-minute post-experience survey. The survey was distributed via an online Qualtrics link containing demographic questions, the revised Interprofessional
Collaborative Competency Attainment Scale (ICCAS), six questions assessing their perception of communication, and three open-ended questions exploring their preferences regarding avatar learning and whether the experience (AI vs. Live) promoted their confidence, ability, and knowledge.
Descriptive statistics and frequencies were run on all quantitative data collected. Differences between subjects' responses on the ICCAS between simulation rounds (AI avatar versus live actor avatar) were assessed with the non-parametric Wilcoxon signed-rank test. Differences on the perception-of-communication scale between simulation rounds (AI avatar versus live actor avatar) were assessed using a paired t-test, and effect sizes were estimated with Cohen's d. For the qualitative analysis, the PI manually coded the responses from the survey's open-ended questions. The coding process employed the first-cycle coding practices described by Saldana (2016). Specifically, the PI used in-vivo coding (direct quotes from the participants) and descriptive coding of brief phrases or words. Codes were then arranged into categories. The PI created a data codebook that was provided to the co-PIs for their independent review of the proposed codes and categories. Once 100% intercoder agreement was established for the codes and categories, the PI and co-PIs generated consensus-driven thematic analysis statements addressing each of the research questions. Once the PI and co-PIs analysed both data sets, they converged the quantitative and qualitative data to create a better understanding of the participants' responses.
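For readers who wish to reproduce this style of analysis, the following sketch shows the paired comparisons described above (Wilcoxon signed-rank test, paired t-test, and Cohen's d for paired samples) implemented with SciPy. The score vectors are hypothetical placeholders, not the study data.

```python
# Sketch of the paired analyses described above: Wilcoxon signed-rank test,
# paired t-test, and Cohen's d for paired samples. Data values are hypothetical.
import numpy as np
from scipy import stats

# One perception-of-communication item, scored 0-100 by the same 9 raters
# after the AI avatar session and after the live actor avatar session.
ai_scores   = np.array([40, 55, 30, 70, 45, 60, 25, 50, 35], dtype=float)
live_scores = np.array([95, 99, 90, 100, 92, 98, 85, 97, 93], dtype=float)

diff = live_scores - ai_scores

# Non-parametric comparison (as used for the ordinal ICCAS items).
w_stat, w_p = stats.wilcoxon(live_scores, ai_scores)

# Parametric comparison (as used for the 0-100 sliding-scale communication items).
t_stat, t_p = stats.ttest_rel(live_scores, ai_scores)

# Cohen's d for paired samples: mean difference divided by the SD of the differences.
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.3f}")
print(f"Paired t: t={t_stat:.2f}, p={t_p:.3f}, d={cohens_d:.2f}")
```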
3.1 Instruments
The learning experiences generated by the AI and live actor avatars were evaluated post-interaction using the validated revised ICCAS, a 21-item questionnaire intended to measure interprofessional communication and collaboration on a 5-point Likert scale (1 = poor, 2 = fair, 3 = good, 4 = very good, and 5 = excellent), which has been used in similar avatar simulations (Rippon et al., 2023). Subjects' perception of the authenticity of the avatar communication was evaluated using a 6-item survey, created to assess perception of the authenticity of verbal and non-verbal communication and rated on a sliding scale (0 = not authentic to 100 = completely authentic). The perception-of-avatar-communication survey underwent face validation. Speech Language Pathology (SLP) faculty were included in the face validation and deemed all items pertaining to verbal and non-verbal communication important to include and easy to assess. The 6 items demonstrated strong internal consistency (Cronbach's α = .951) and moderate inter-rater reliability by intra-class correlation (ICC) average measures (ICC = .745, p < .001). Items pertaining to verbal communication (n = 3) had a strong inter-item correlation (r = .837-.877), and items pertaining to non-verbal communication (n = 3) had a moderately strong inter-item correlation (r = .712-.774).
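The reliability statistics reported here can be computed from a raters-by-items score matrix with standard formulas. The sketch below illustrates Cronbach's alpha and the inter-item correlations for a hypothetical 9 x 6 ratings table; it is not a re-analysis of the study data.

```python
# Sketch of the reliability checks reported above: Cronbach's alpha and inter-item
# correlations for a 6-item, 0-100 sliding-scale survey. Ratings are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# 9 raters x 6 items; columns v1-v3 = verbal items, nv1-nv3 = non-verbal items.
ratings = pd.DataFrame(rng.integers(40, 100, size=(9, 6)),
                       columns=["v1", "v2", "v3", "nv1", "nv2", "nv3"])

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

alpha = cronbach_alpha(ratings)
verbal_corr = ratings[["v1", "v2", "v3"]].corr()        # inter-item r, verbal items
nonverbal_corr = ratings[["nv1", "nv2", "nv3"]].corr()  # inter-item r, non-verbal items

print(f"Cronbach's alpha = {alpha:.3f}")
print(verbal_corr.round(2))
print(nonverbal_corr.round(2))
```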
3.2 Sample
Nine participants enrolled in this pilot study (N = 9). All were faculty in the health and allied health professions (Athletic Training = 2, Physician Assistant = 1, Physical Therapy = 3, Occupational Therapy = 1, Speech Language Pathology = 2) with over 2 years of teaching experience, and all (n = 9) had previous experience with patient care. Of the 9 participants, 2 identified as male and 7 as female, all were between the ages of 35 and 55, and none had formal training in VR simulation. Subjects self-selected into group 1 (n = 3) and group 2 (n = 6).
4 RESULTS
All 9 participants completed both the AI avatar and live actor avatar interactions on the same day. There were technical issues with the Wi-Fi during day one of testing, which required the live actor avatar encounter to be delayed. Additionally, there were technical issues during the AI avatar sessions: the system stopped responding early in one encounter, and the encounter needed to be stopped and restarted. After restarting, the participants were able to complete the session in its entirety.
4.1 Quantitative
For the AI avatar interaction subjects had a mean perception of the avatar's verbal communication, with 100 indicating the highest level of authenticity, of 47.2 ± 33.9, a mean perception of the avatar responding appropriately to questions of 56.4 ± 33.8, and a mean perception of displaying appropriate vocal characteristics (such as tone, rate of speech and loudness) of 44.9 ± 27.9.
For the AI avatar interaction participants had a mean perception of the avatar's non-verbal communication of 47.2 ± 33.9, a mean perception of displaying authentic facial responses of 31.8 ± 24.8, and a mean perception of displaying authentic emotions of 35.5 ± 27.0.
For the Live avatar interaction participants had a mean perception of the avatar's verbal communication of 97.8 ± 4.8, a mean perception of the avatar responding appropriately to questions of 98.1 ± 2.6, and a mean perception of displaying appropriate vocal characteristics (such as tone, rate of speech and loudness) of 91.2 ± 15.4. For the Live avatar interaction participants had a mean perception of the avatar's non-verbal communication of 63.6 ± 20.2, a mean perception of displaying authentic facial responses of 59.1 ± 15.9, and a mean perception of displaying authentic emotions of 84.9 ± 16.8.
For analysis of the differences in perception of the avatar communication, due to the small sample size both non-parametric and parametric tests were run. Since the data were normally distributed and there was a large effect size, the parametric results are reported. Paired t-test analyses showed significant differences in the perceived authenticity of the AI and the Live avatar communication. Participants had a 41.7 ± 34.7 higher perception of the Live avatar responding appropriately to questions (t = 3.792, p = .004, 95% CI 16.8-66.5), and avatar modality had a strong effect size (d = 1.199). Participants had a 27.3 ± 25.9 higher perception of the Live avatar displaying authentic facial responses (t = 3.331, p = .009), with avatar modality having a strong effect size (d = 1.053). Participants had a 49.4 ± 35.9 higher perception of the Live avatar displaying authentic emotions (t = 4.341, p < .001, 95% CI 23.6-75.1), with avatar modality having a strong effect size (d = 1.373). Subjects had a 46.3 ± 29.6 higher perception of the Live avatar displaying appropriate vocal characteristics (t = 4.934, p < .001, 95% CI 25.1-67.5), with avatar modality having a strong effect size (d = 1.560). Participants had a 50.6 ± 37.3 higher perception of the Live avatar displaying authentic verbal communication (t = 4.290, p < .001, 95% CI 23.9-77.3), with avatar modality having a strong effect size (d = 1.357). Subjects had a 34.9 ± 32.0 higher perception of the Live avatar displaying authentic non-verbal communication (t = 3.445, p = .004, 95% CI 11.9-57.8), with avatar modality having a strong effect size (d = 1.090).
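For paired comparisons, Cohen's d equals the mean difference divided by the standard deviation of the differences, so the effect sizes reported above can be checked directly from the reported means and SDs (small discrepancies reflect rounding). A brief sketch of this check:

```python
# Quick check that the reported paired effect sizes equal mean difference / SD of
# differences, using the rounded values reported in the text above.
pairs = {
    "responds appropriately": (41.7, 34.7),      # reported d = 1.199
    "authentic facial responses": (27.3, 25.9),  # reported d = 1.053
    "authentic emotions": (49.4, 35.9),          # reported d = 1.373
    "vocal characteristics": (46.3, 29.6),       # reported d = 1.560
    "verbal communication": (50.6, 37.3),        # reported d = 1.357
    "non-verbal communication": (34.9, 32.0),    # reported d = 1.090
}
for item, (mean_diff, sd_diff) in pairs.items():
    print(f"{item}: d = {mean_diff / sd_diff:.3f}")
```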
Participants reported some differences between the learning experiences, AI and Live avatar, based on the ICCAS. A Wilcoxon signed-rank test indicated that, compared to before the learning activities, subjects had no significant difference in their perception of their ability to collaborate interprofessionally (p = .121). However, after the Live avatar simulation participants did report a significantly higher perception of their ability to: actively listen to IP team members' ideas and concerns (z = 2.041, p = .041), provide constructive feedback to IP team members (z = 1.994, p = .046), learn with, from, and about IP team members to enhance care (z = 2.410, p = .016), identify and describe their abilities and contributions to the IP team (z = 2.428, p = .015), be accountable for their contributions to the IP team (z = 2.32, p = .026), recognize how others' skills and knowledge complement and overlap with their own (z = 2.157, p = .031), use an IP team approach with the patient to assess the health situation (z = 2.379, p = .017), use an IP team approach with the patient to provide whole person care (z = 2.716, p = .007), include the patient/family in decision making (z = 2.200, p = .028), address conflict in a respectful manner (z = 2.165, p = .030), and develop an effective care plan with IP team members (z = 2.041, p = .041). There was no significant difference between perceptions of their ability following the AI and Live avatar simulations to: promote effective communication among members of the IP team (p = .053), express their ideas and concerns in a clear, concise manner (p = .579), seek out IP team members to address issues (p = .149), work effectively with IP team members to enhance care (p = .057), understand the abilities and contributions of IP team members (p = .055), actively listen to the perspectives of IP team members (p = .141), take into account the ideas of IP team members (p = .071), and negotiate responsibilities with overlapping scopes of practice (p = .123).
4.2 Qualitative
Table 1 provides the participants' in vivo codes and associated categories that emerged from responses to the question asking them to share their thoughts regarding the AI Avatar-Based Virtual Learning Experience just completed and its impact on their skills as a healthcare professional. Upon reviewing the categories that emerged following participation in the AI-specific case scenario, the following thematic analysis statement is proposed: faculty perceived that the AI Avatar-Based Virtual Learning Experience promoted collaboration amongst professionals, provided an opportunity for practice, and assisted in supporting person-centred care practice. However, technical issues were present that negatively impacted the learning experiences. Ultimately, the AI avatar scenario appeared less realistic.
Table 2 provides the participants' in vivo codes and associated categories emerging from responses to the question asking them to share their thoughts regarding the LIVE Avatar-Based Virtual Learning Experience and its impact on their skills as a healthcare professional.
Table 1: Participants’ perceptions regarding the AI Avatar-Based Virtual Learning Experience. (Survey prompt: "Please share with us your overall thoughts regarding the Avatar Based Virtual Learning Experience you just completed and its impact on your skills as a healthcare professional.")

F2: "I thought it was a great experience to collaborate with other team members. It may be beneficial as a learning experience to support collaboration and advocacy for an individual's profession. It also assists in looking at the whole person rather than just the deficits my area may address." (Categories: Collaboration; Whole person)

F4: "This is great for healthcare students to practice interviewing and developing their clinical reasoning and clinical decision making." (Category: Promotes practice)

F5: "There was initial discomfort in the interaction. It was difficult to navigate the 'humanistic' aspect that is often associated in these meetings." (Category: Technical issues)

F1: "It is valuable for sure but has some limitations. Our group had some strong OT and SLP that have more recent clinical experience than I have so I really let them run with things. We didn't have any conflict as a provider group. I observed quite a bit more than participated as I wanted to 'stay in my lane'." (Category: Collaboration)

F10: "This experience was challenging, and I don’t feel like I gained a lot from it. It didn’t have a great impact on me as a professional because communication breakdowns were frequent, and it was challenging to navigate individually and as a team." (Categories: Technical issues; Limited impact)

F9: "This type of avatar was less realistic, and it was difficult to keep the flow of the conversation." (Categories: Technical issues; Less realistic)

F6: "Having the AI avatar was much more difficult to interact with. Responses were delayed and the ability for the team to interact was limited. The avatar cut us off, so our answers were limited to how much it allowed us to speak." (Categories: Technical issues; Less realistic)

F11: "Due to the design of this avatar, the interactions did not feel authentic. There were awkward pauses, and the conversation did not flow in a realistic manner." (Categories: Technical issues; Less realistic)

F7: "The AI felt very robotic. His responses, both verbal and non-verbal, were not realistic. He was pausing a lot and taking over the healthcare team. This inhibited the ability of healthcare members to collaborate. There was a feeling of trying to rush through because it didn’t feel real." (Categories: Technical issues; Less realistic)

F8: "AI needs much improvement." (Category: Technical issues)
Table 2: Participants’ perceptions regarding the LIVE Avatar-Based Virtual Learning Experience. (Survey prompt: "Please share with us your overall thoughts regarding the Avatar Based Virtual Learning Experience you just completed and its impact on your skills as a healthcare professional.")

F5: "I felt that the use of VR enabled a greater humanistic aspect that made it easier for us to interact both as a team and as 1:1 clinician with the patient." (Category: Realistic)

F1: "I liked this better as the patient was very realistic. Changes in voice tone are important to understanding the patient." (Category: Realistic)

F2: "The sim allowed me to confront a more genuine patient experience where not all my ideas would be well received by the patient and forced me to offer a variety of solutions. It also encouraged patient therapist collaboration." (Categories: Realistic; Collaboration)

F4: "This is a great way for students to practice interview and the clinical decision making." (Category: Promotes practice)

F10: "It was interesting, and the conversation flowed very easily. I appreciated hearing from my colleagues and how they approach CJ differently and similarly to me. It was a great learning opportunity." (Category: Communication)

F6: "I thought it went really well. I liked the overall mannerisms and discussion." (Categories: Communication; Realism)

F8: (no response)

F9: "Excellent, the software was very interactive and facilitated a logical flow of ideas." (Category: Communication)

F11: "This was my first interaction with CJ and felt that it was a great way to allow for interdisciplinary work. Engaging in IP work enhances clinical skills and pushes your knowledge and communication abilities. The Avatar Based experience allowed for a real-life simulation of a team meeting." (Categories: Communication; Realistic)

F7: "The client was extremely realistic. It felt like talking to a real patient. The responses were very real. The questions were real. The client's tone of voice and the way they sounded unsure felt personal rather than robotic." (Category: Realistic)
Upon reviewing the categories that emerged following participation in the LIVE-specific case scenario, the following thematic analysis statement is proposed: faculty perceived that the LIVE Avatar-Based Virtual Learning Experience promoted collaboration and communication among professionals, provided an opportunity for practice, and was realistic to person-centred care.
5 DISCUSSION
This mixed methods study examined participants' perceived preferences regarding interactions with AI avatar patients and live actor avatar patients, learning experiences with the two avatar types, and differences in participants' perceptions of overall communication with the AI and live actor avatar patients. Interactions were assessed according to verbal communication with the AI and live actor avatars respectively. Participants preferred verbal communication with the live actor avatar over verbal communication with the AI avatar. Study participants reported that the live actor avatar responded in a "socially appropriate" manner to questions and displayed the capability to vary vocal characteristics (tone, rate of speech, and volume) that helped create a realistic patient conversation when compared with the verbal interactions with the AI avatar. Additionally, during the live actor avatar interactions, participants perceived the avatar as displaying authentic emotions and correspondingly reported the live actor avatar to display authentic verbal and non-verbal communication as well.
Avatars portrayed by live actors offer a level of realism because they mirror the movements and utilize the voices supplied by real people. This approach captures some nuances of human behaviour, making interactions feel more genuine and emotionally resonant (Hadhazy, 2022). Additionally, live actor avatars provide a layer of emotional depth and convey a wide range of emotions, which is crucial for applications that require deep emotional engagement, such as virtual therapy or immersive storytelling (Hadhazy, 2022).
While AI avatars can perform a wide range of actions without human intervention (Kyrlitsias and Michael-Grigoriou, 2022) and, through their ability to react to conversational inputs, are thought to converse in a natural and humanistic manner (Javaid et al., 2023), the use of text-to-speech technology and the difficulty of integrating speech emotion recognition (SER) systems limit the ability of the avatar to perceive and integrate non-verbal information in speech (Wani et al., 2021) and respond appropriately.
Additionally, participants reported that after the live actor avatar experience they had significantly improved perceptions of the skills necessary for effective interprofessional (IP) teamwork. Specifically, participants reported perceived improvements in their skills and abilities when working with their IP colleagues, including: active listening, providing constructive feedback, learning with, from, and about other professionals on the team so as to enhance patient care, describing their individual contributions to IP care, being accountable for their individual contributions to the IP team, and using a team approach to develop a comprehensive, person- and family-centred plan of care.
This study's findings further support and extend recent research examining simulation modalities in healthcare education. Live actor avatars achieved substantially higher authenticity ratings in verbal communication and emotional expression. However, this should not discount the utility of AI-based simulation. Research examining AI Virtual Simulated Patients (AI-VSP) found high acceptance rates among diverse healthcare students (84-93% recommendation rates), suggesting AI can serve as a valuable supplementary training tool (De Mattei et al., 2024; Lanza-Postigo et al., 2024). Therefore, the use of AI avatars may be advantageous when more traditional forms of simulation are not possible, or when the focus of the learning activity is skill development requiring the capability to repeatedly practice the same scenario.
Building on these findings, a 2024 study by Vogelsang et al. on simulated learning experiences involving immersive virtual reality revealed significant student improvements in self-efficacy, particularly for specific clinical scenarios such as managing aggressive behaviours in dementia care. In that study, the VR intervention group showed statistically significant improvements compared with controls, both immediately post-intervention and following clinical rotations.
A critical theme emerging across recent studies,
and implied in this study, is the role of simulation in
developing socioemotional competencies. A recent
systematic review identified communication (34.4%)
and self-efficacy (30.5%) as the most frequently
trained skills during simulation experiences (Lanza-
Postigo et al., 2024). This aligns with findings from
both the AI avatar and VR studies, where
improvements in interprofessional communication
and self-confidence were consistently observed (De
Mattei et al., 2024; Vogelsang et al., 2024). While
standardized patients (28.4%) and high-fidelity simulation (26.1%) remain the most prevalent modalities for socioemotional skills training (Lanza-Postigo et al., 2024), our work, and that of others in VR, suggests that emerging technologies like AI and VR should complement, not replace, traditional approaches. Specifically, AI and VR can reduce some of the notable barriers to simulation in healthcare, including lack of time, resources, financial cost, and workload issues (Al-Ghareeb and Cooper, 2016). The use of VR allows for the creation of virtual environments that may be costly to reproduce in the physical world, for varied physical appearances, and for institutions without dedicated simulation spaces to facilitate these interactions.
Learning objectives and resources need to be considered when selecting VR simulation technology. Based upon the results of this study, live actor avatar simulations were perceived to foster a more collaborative and patient-centred care approach. However, the AI avatar experience may be preferable when the learning experience is focused on skill introduction, development, repeated practice, and scalability.
5.1 Limitations and Future Research
Limitations of this study included its small sample size, which may affect the generalizability of the results, and technical issues surrounding the AI avatar experience, which may have impacted participants' ability to achieve the learning outcomes. Although the study employed randomization and counterbalancing, there is always a potential for sequencing effects and participant fatigue. Additionally, given that the primary focus was to assess individual perceptions, behavioural tracking was not employed, which may introduce social desirability bias. Future research should focus on assessment of engagement levels, speech analysis, and avatar response time. Future research should also employ a longitudinal design to examine perceptions of interprofessional team skills post-simulation and assess the long-term learning effects in healthcare students.
6 CONCLUSIONS
This study demonstrated that participants in virtual simulation experiences preferred the bidirectional, authentic communication offered by the live actor avatar patients when compared to the AI-driven avatar patients. While the sample size was small, the large effect sizes demonstrated a perceived value in both types of avatar patient experiences, with perceptions more favourable for simulating patient-centred interactions with the live actor avatars.
This synthesis suggests a future where traditional and emerging simulation modalities work in concert, each addressing specific learning objectives while collectively providing comprehensive preparation for clinical practice. The challenge for educators lies in thoughtfully integrating these approaches to maximize learning outcomes while managing faculty abilities, student learning preferences, and educational and environmental resource constraints.
REFERENCES
Al-Ghareeb, A. Z., & Cooper, S. J. (2016). Barriers and
enablers to the use of high-fidelity patient simulation
manikins in nurse education: An integrative review.
Nurse Education Today, 36, 281–286.
https://doi.org/10.1016/j.nedt.2015.08.005
Alinier, G., Hunt, B., Gordon, R., & Harwood, C. (2006).
Effectiveness of intermediate-fidelity simulation
training technology in undergraduate nursing
education. Journal of Advanced Nursing, 54(3), 359–
369. https://doi.org/10.1111/j.1365-2648.2006.03810.x
Biwer, F., Oude Egbrink, M. G., Aalten, P., & De Bruin, A.
B. (2020). Fostering effective learning strategies in
higher education–a mixed-methods study. Journal of
Applied Research in Memory and Cognition, 9(2), 186-
203.
Bogossian, F., Cooper, S., Cant, R., Beauchamp, A., Porter,
J., Kain, V., Bucknall, T., Phillips, N. M., &
FIRST2ACT Research Team. (2014). Undergraduate
nursing students’ performance in recognizing and
responding to sudden patient deterioration in high
psychological fidelity simulated environments: An
Australian multi-centre study. Nurse Education Today,
34(5), 691–696.
https://doi.org/10.1016/j.nedt.2013.09.015
Clapper, T. C. (2010). Beyond Knowles: What those
conducting simulation need to know about adult
learning theory. Clinical Simulation in Nursing, 6(1),
e7-e14.
Cooper, L. A., Roter, D. L., Carson, K. A., Beach, M. C.,
Sabin, J. A., Greenwald, A. G., & Inui, T. S. (2012).
The Associations of Clinicians’ Implicit Attitudes
About Race With Medical Visit Communication and
Patient Ratings of Interpersonal Care. American
Journal of Public Health, 102(5), 979–987.
https://doi.org/10.2105/AJPH.2011.300558
Creswell, J. W. (2021). A concise introduction to mixed
methods research. SAGE Publications.
De Mattei, L., Morato, M. Q., Sidhu, V., Gautam, N.,
Mendonca, C. T., Tsai, A., Hammer, M., Creighton-
Wong, L., & Azzam, A. (2024). Are Artificial
Intelligence Virtual Simulated Patients (AI-VSP) a
valid Teaching Modality for Health Professional
Students? Clinical Simulation in Nursing, 92, 101536.
https://doi.org/10.1016/j.ecns.2024.101536
Endacott, J. L. (2014). Negotiating the Process of Historical
Empathy. Theory & Research in Social Education,
42(1), 4–34.
https://doi.org/10.1080/00933104.2013.826158
Fujii, A. (2024). Exploring autonomy support and learning
preferences in higher education: Introducing a flexible
and personalized learning environment with
technology. Discover Education, 3(26).
https://doi.org/10.1007/s44217-024-00111-z
Hadhazy, A. (2022, December 14). Where and who you are
in VR has a real impact, study finds. Stanford Report.
Retrieved October 28, 2024, from
https://news.stanford.edu/stories/2022/12/vr-real-impact-study-find
Javaid, M., Haleem, A., & Singh, R. P. (2023). ChatGPT
for healthcare services: An emerging stage for an
innovative perspective. BenchCouncil Transactions on
Benchmarks, Standards and Evaluations, 3(1), 100105.
https://doi.org/10.1016/j.tbench.2023.100105
Kohler, T., Matzler, K., & Füller, J. (2009). Avatar-based
innovation: Using virtual worlds for real-world
innovation. Technovation, 29(6), 395–407.
https://doi.org/10.1016/j.technovation.2008.11.004
Kyrlitsias, C., & Michael-Grigoriou, D. (2022). Social
interaction with agents and avatars in immersive virtual
environments: A survey. Frontiers in Virtual Reality,
2(10), 786665.
https://doi.org/10.3389/frvir.2021.786665
Lanza-Postigo, M., Abajas-Bustillo, R., Martin-Melón, R.,
Ruiz-Pellón, N., & Ortego-Maté, C. (2024). The
Effectiveness of Simulation in the Acquisition of
Socioemotional Skills Related to Health Care: A
Systematic Review of Systematic Reviews. Clinical
Simulation in Nursing, 92, 101547.
https://doi.org/10.1016/j.ecns.2024.101547
Rhodes, M. L., & Curran, C. (2005). Use of the human
patient simulator to teach clinical judgment skills in a
baccalaureate nursing program. Computers,
Informatics, Nursing: CIN, 23(5), 256–262; quiz 263–
264. https://doi.org/10.1097/00024665-200509000-
00009
Rider, E. A., Kurtz, S., Slade, D., Longmaid III, H. E., Ho,
M. J., Pun, J. K. H., ... & Branch Jr, W. T. (2014). The
International Charter for Human Values in Healthcare:
An interprofessional global collaboration to enhance
values and communication in healthcare. Patient
Education and Counselling, 96(3), 273-280.
Rippon, L., Zipp, G. P., Snowdon, L., Cobb, L., Downer,
M. C., MacGregor, A., ... & Maffucci, D. M. (2023).
Interprofessional active learning environment
employing virtual reality simulation to promote
telehealth practices and psychosocial well-being.
Journal of Allied Health, 52(4), 258-267.
Violato, E., MacPherson, J., Edwards, M., MacPherson, C.,
& Renaud, M. (2023). The use of simulation best
practices when investigating virtual simulation in
healthcare: A scoping review. Clinical Simulation in
Nursing, 79, 28–39.
https://doi.org/10.1016/j.ecns.2023.03.001
Vogelsang, L., Wright, S., Risling, T., de Padua, A., Leidl,
D., Wilson, J., & Thompson, D. (2024). Exploring the
use of immersive virtual reality in nursing education: A
scoping review. Clinical Simulation in Nursing, 97,
101648. https://doi.org/10.1016/j.ecns.2024.101648
Wani, T. M., Gunawan, T. S., Qadri, S. A. A., Kartiwi, M.,
& Ambikairajah, E. (2021). A Comprehensive Review
of Speech Emotion Recognition Systems. IEEE Access,
9, 47795–47814.
https://doi.org/10.1109/ACCESS.2021.3068045
Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A.,
Spector, M., ... & Li, Y. (2021). A review of artificial
intelligence (AI) in education from 2010 to 2020.
Complexity, 2021.