Generative Artificial Intelligence for Immersive Analytics
Chaoming Wang (https://orcid.org/0009-0005-4979-6059), Veronica Sundstedt (https://orcid.org/0000-0003-3639-9327)
and Valeria Garro (https://orcid.org/0000-0002-9527-4594)
Blekinge Institute of Technology, Karlskrona, Sweden
{chaoming.wang, veronica.sundstedt, valeria.garro}@bth.se
Keywords:
Generative Artificial Intelligence, Immersive Analytics, Extended Reality, Visualization.
Abstract:
Generative artificial intelligence (GenAI) models have advanced various applications with their ability to gen-
erate diverse forms of information, including text, images, audio, video, and 3D models. In visual computing,
their primary applications have focused on creating graphic content and enabling data visualization on tradi-
tional desktop interfaces, which help automate visual analytics (VA) processes. With the rise of affordable
immersive technologies, such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), im-
mersive analytics (IA) has emerged as a field offering unique opportunities for deeper engagement with and
understanding of complex data in immersive environments (IEs). However, IA system development remains
resource-intensive and requires significant expertise, while the integration of GenAI capabilities into IA is still
in early exploration. Therefore, based on an analysis of recent publications in these fields, this position paper
investigates how GenAI can support future IA systems for more effective data exploration with immersive
experiences. Specifically, we discuss potential directions and key issues concerning future GenAI-supported
IA applications.
1 INTRODUCTION
Visual analytics (VA) stems from information and scientific
visualization and related areas such as data mining;
it allows humans to explore complex data
through interactive visual interfaces (Wong and
Thomas, 2004) and address complex data-driven
tasks. Professionals can gain a deeper understanding
of data and explore different questions and patterns
using VA processes and tools. The emergence
of Extended Reality (XR), encompassing technologies
such as augmented reality (AR), virtual reality
(VR), and mixed reality (MR), has made immersive
environments (IEs) widely available across applications
and holds great potential (Fortune Business
Insights, 2024).
With three-dimensional (3D) visualization spaces,
XR provides a new medium to present complex data
visualizations and enables immersive analytics (IA).
IA is an emerging research field that relies on IEs for
various data visualization and analytic tasks (Klein
et al., 2022). Combining IA with multisensory fea-
tures, such as haptics or sound, can further enhance
analysis options for the user (Kraus et al., 2021) or
personalize the experience to user needs (McCormack et al., 2018).
While IA systems have shown advantages in many
interactive data exploration and decision-making scenarios,
their development can be time-consuming
and demands substantial expert knowledge. Understanding
data visualizations in IEs also requires expertise
in interaction techniques and analytical methods.
On the other hand, generative artificial intelligence
(GenAI) has unleashed great potential across
many fields with its multimodal content generation
capabilities. According to a recent report by Gartner,
Inc. (Gartner, 2024), more than 80% of companies
will have used GenAI APIs or deployed GenAI
applications by 2026. This paper is therefore motivated
to investigate how IA applications can benefit
from GenAI models.
Starting by reviewing the state-of-the-art research
on relevant topics across VA, XR and IA, this posi-
tion paper aims to identify promising trends for future
GenAI-supported IA. The rest of the paper is orga-
nized as follows. Section 2 presents relevant research
directions that could impact the development of future
GenAI-supported IA applications. Here, the work is
categorized into three main areas: (1) GenAI for Non-
Immersive Visual Analytics, (2) GenAI and XR, and
(3) initial work on GenAI for IA. Section 3
brings the trends together in a deeper discussion and
highlights interesting future directions for GenAI in
IA applications, while Section 4 concludes the work.
2 CURRENT TRENDS
As IA is an emerging field with deep roots in conven-
tional VA, the investigation starts by reviewing break-
throughs in several relevant research fields, including
GenAI for non-immersive data visualization, GenAI
for XR applications, and a potential transition toward
GenAI for immersive data analysis. We acknowledge
that prior research on GenAI for visualization has
identified key issues and challenges in general data
visualization tasks. GenAI has been integrated into
visualization tasks with different capabilities, such as
enhancing creativity, automating visualization, or assisting
human reasoning processes from a general perspective
(Basole and Major, 2024). Ye et al. have also
provided a broad overview of GenAI in visualization
and highlighted challenges such as data limitations
and evaluation methods (Ye et al., 2024). However,
both works leave the integration of GenAI with VA
in IEs under-explored. As real-time interaction and
spatial awareness are essential characteristics of data
visualization tasks in IEs (Marriott et al., 2018),
whether widely adopted GenAI techniques fit those
tasks still needs to be assessed. Thus, this paper revisits
recent research efforts toward integrating GenAI in
the IA field, aiming to identify the latest trends.
2.1 GenAI for Non-Immersive VA
Although IA is an emerging field that performs
VA with multisensory interfaces in XR, the mainstream
workflow is still rooted in conventional
2D VA, as immersive visualization is the fundamental
component of IA (Kraus et al., 2021). Conventional
VA workflows, including data processing,
visual mapping, and visual perception procedures,
can be adapted in IA. Therefore, insights from
state-of-the-art combinations of GenAI, conventional
data visualization, and VA may be transferred
into IEs, always considering the potential adaptations
needed for a different medium such as XR.
GenAI and VA have been applied in various re-
search domains across smart healthcare, industrial
data communication, and laboratory training. GenAI
models, especially pre-trained LLMs (e.g., OpenAI’s
ChatGPT), support these applications’ visual under-
standing and reasoning processes by generating addi-
tional guiding information or automatically summa-
rizing insights in text formats. For example, Chat-
GPT has been employed in many VA applications
to generate textual descriptions based on either data
input or information graphs (Gandee et al., 2024).
Deep Neural Networks (DNNs) have also been em-
ployed in VA systems to transform the graphs into
narratives (Shrestha et al., 2022). Meanwhile, the
LLM model BART (Lewis et al., 2020) was applied
to simplify lengthy textual details, as excessive in-
formation may cause information overload for users.
Additionally, procedural diagrams such as flowcharts
are essential visual tools in iterative design processes.
GenAI can update and manage these visualizations
following designers' instructions (Balhorn et al., 2024;
Alshareef et al., 2024).
One key direction of using GenAI for general
visualization tasks is to enhance the interpretability
of data visualization. While research on data visualization
focuses heavily on innovative ways to
present patterns and insights in abstract visual formats
that make complex information more accessible,
users may still struggle to understand these visualizations
without the necessary domain context.
Therefore, data storytelling has been an active re-
search field that can complement abstract information
graphs with narrative context, enhancing visual un-
derstanding (Shao et al., 2024; Dykes, 2015). GenAI
has increasingly automated the creation of data nar-
ratives (Lo Duca, 2023; Strömel et al., 2024; Lo
Duca, 2024). This narrative generation capability
helps users interpret data patterns with minimal cog-
nitive load, clearly explaining data findings and their
implications. Researchers have evaluated and con-
firmed that data narratives enhance visual understand-
ing in fitness analysis (Strömel et al., 2024) by pro-
viding complementary qualitative data and more re-
flective engagement. In learning analytics, the idea
of using LLMs to automate data narratives has also
been adopted (Pinargote et al., 2024; Yan et al.,
2024). Whereas GenAI-based data storytelling expands
information for better visual understanding,
visual summarization of raw data works in the opposite
direction, enabling users to grasp an overview
quickly. GenAI has been applied to create visual summaries
(charts and text) of data inputs in natural formats
such as audio, video, voice, and text (Nath and
Ethirajan, 2023).
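As a concrete illustration of such narrative generation, the following minimal Python sketch turns summary statistics into a short narrative. It assumes the OpenAI Python SDK; the model name, prompt, and statistics are illustrative only and not tied to any system cited above.

```python
# Minimal sketch of LLM-based data narration, assuming the OpenAI
# Python SDK (v1+); model name, prompt, and statistics are
# illustrative only and not tied to any cited system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def narrate(stats: dict) -> str:
    """Turn summary statistics into a short narrative for a dashboard."""
    prompt = (
        "Write a two-sentence, plain-language narrative for a fitness "
        f"dashboard based on these weekly statistics: {stats}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(narrate({"steps_mean": 8421, "steps_trend": "+12% vs. last week"}))
```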
Ye et al. have assessed various GenAI tech-
niques tailored for visualization in a non-immersive
context (Ye et al., 2024). These techniques include
data enhancement, automatic visualization genera-
tion, stylization, and supporting user interactions.
However, the automatic generation of data visualizations
with precise numerical awareness remains
an open challenge with great potential. Several works are
emerging toward generating data visualizations with
stylizations and numerical awareness. For example,
ChartSpark (Xiao et al., 2024) has proposed a text-to-image
generative approach to create so-called
pictorial visualizations, in which data charts guide
the numerical representations. This modern visualization
style embeds semantic context that, in some scenarios,
can enhance visual understanding of key
data insights. Another work, ChartMimic (Shi et al.,
2024), can generate similar axis-based data visualizations
from structured text commands using proprietary
and open-weight LLMs. Despite the novelty,
the results indicate limitations in reproducing data visualizations
from chart inputs. Moreover, lim-
itations in current visualization datasets and issues in
effectively integrating GenAI models with VA work-
flows remain significant concerns.
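To make the chart-generation step concrete, the sketch below shows the general shape of a text-to-chart pipeline in the spirit of these works (not their actual implementations): an LLM is asked to emit matplotlib code from a structured command. Here, ask_llm is a placeholder that returns a canned answer so the example runs offline.

```python
# Sketch of text-to-chart generation in the spirit of ChartMimic-style
# chart-to-code tasks (not the authors' pipeline). `ask_llm` is a
# placeholder for any text-generation API; it returns a canned answer
# so the example runs without network access.
def ask_llm(prompt: str) -> str:
    return (
        "import matplotlib.pyplot as plt\n"
        "fig, ax = plt.subplots()\n"
        "ax.bar(['Q1', 'Q2', 'Q3'], [120, 95, 143])\n"
        "ax.set_ylabel('Sales (units)')\n"
        "fig.savefig('chart.png')\n"
    )

command = ("Bar chart of quarterly sales: Q1=120, Q2=95, Q3=143; "
           "label the y-axis 'Sales (units)'.")
code = ask_llm(f"Emit only matplotlib code. {command}")
exec(code)  # generated code must be sandboxed in any real system
```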
2.2 GenAI and XR
GenAI and XR are converging to create innovative
applications across various fields, such as architec-
tural design, smart healthcare, and interactive learn-
ing. Most applications leverage the strengths of IEs
in visual presentations along with the capabilities
of GenAI in multimodal reasoning, enhancing nat-
ural interactions and personalization. For instance,
in CogXR (Yan and Zhang, 2024), researchers de-
veloped a graphical tool that offers visual cues for
long text in the XR space, enabling users to navigate
and comprehend the reading material swiftly. Be-
yond providing real-time assistance and feedback in
immersive learning experiences, these GenAI tech-
niques have also enhanced personalized experiences
in virtual stores (Wang et al., 2024a) and virtual exhi-
bitions (Constantinides et al., 2024; Vasic et al., 2024)
by generating recommendations based on individuals’
preferences.
Natural interaction techniques are key compo-
nents for immersive experiences and have been fa-
cilitated by many GenAI models. In architectural
design, for instance, while XR is advantageous in
presenting architectural visualizations, GenAI en-
ables non-experts to design, modify, and explore XR
spaces seamlessly, fostering accessibility and inno-
vation across sectors by integrating natural language
processing with image generation (Le et al., 2023;
Drogemuller et al., 2023; Chen et al., 2024). This
research field often intersects with 3D object gener-
ation by GenAI technology in XR. Users can create
and edit architectural spaces and objects with speech
and gesture-based interactions supported by text-to-
image GenAI, such as Stable Diffusion (Rombach
et al., 2022) and ControlNet (Zhang et al., 2023). Be-
sides, inspired by text-to-3D (Tang et al., 2023) or
text-guided image-to-3D (Gao et al., 2022) GenAI
approaches, interior designers can easily create 3D
models of architectural objects via natural language
prompts (Yeo et al., 2023).
The convergence of GenAI and XR has led
to transformative advancements in virtual conversa-
tional agents that provide real-time instructional guid-
ance and support complex cognitive tasks such as
training (Ayre et al., 2023; Chan and Liu, 2024), inter-
active learning (Hara, 2024; Cheng et al., 2024; Ka-
padia et al., 2024; Wang et al., 2024a; Hara et al.,
2024; Salehi et al., 2024), and therapeutic engage-
ment (Wang and Zhang, 2024). In healthcare scenar-
ios, the combination of GenAI and XR has been em-
ployed to rehabilitate cognitive impairments and fa-
cilitate behavioral training. Multimodal GenAI mod-
els, particularly ChatGPT, have been used for gener-
ating auditory (Yamauchi et al., 2023) or visual im-
age content (Wang and Zhang, 2024) for rehabilita-
tion. Moreover, LLM-based chatbots in VR and AR
enable users to undergo repeatable training more nat-
urally while reducing the risk of privacy exposure (Li
et al., 2024).
GenAI also plays a fundamental role in social XR
scenarios by assisting in complex decision-making or
promoting communication. LLMs such as ChatGPT
still constitute the majority of GenAI-based approaches
in these applications, simulating human behaviors
such as natural conversation via voice and gestures (Numan
et al., 2023; Si et al., 2023; Shoa et al., 2023).
Although IEs have been applied to present immersive
scenes to users, the visual content generation is still
based on 2D image generation models. For example,
in applications such as a virtual interview (Si et al.,
2023), a Stable Diffusion model has been adopted to
generate tiled texture images based on users’ voice
and gesture input. Moreover, creating LLM-based
chatbots in XR follows a similar idea of assisting
human cognitive processes through real-time interac-
tions.
2.3 GenAI for IA
The field of GenAI for IA has significantly fewer
prior examples compared to non-immersive VA tasks
and XR applications. While GazePointAR (Lee et al.,
2023; Lee et al., 2024) has demonstrated the capa-
bilities of pre-trained LLMs in addressing situated
analytic tasks, it focuses on enhancing interaction
methods in AR by incorporating eye-tracking and
speech interactions with ChatGPT, rather than on
complex information visualization and analytic
tasks. The cross-modal reasoning capabilities of cur-
rent LLMs may fall short of addressing complex VA
tasks and may be even more limited in IEs. For instance,
generating 3D data visualizations may be challenging
as it demands precise spatial accuracy to ensure mean-
ingful interpretations. Even though existing work
such as ChartMimic (Shi et al., 2024) has shown
the capability of generating axis-based data charts,
other data visualization types are less explored, such
as relation-based data visualizations (e.g., networks)
and volumetric data visualizations. Therefore, we envision
great potential in transferring such generation
capabilities into IEs and assessing their usability,
which can be a good starting point for further exploration.
Research efforts have also proposed insightful
perspectives on evaluating visual content in
conventional visualization media and IEs.
Typically, evaluations in these fields combine
quantitative and qualitative methods to capture a com-
prehensive view, while quantitative analysis of hu-
man factors with specific metrics (e.g., task accuracy,
completion time, comprehension, and understanding)
is the primary method for evaluating the IA prop-
erties (Jamaludin et al., 2023). For instance, the
AIsop study employed a human-centered evaluation
approach to measuring the effects of AI-generated im-
agery and VR-based data storytelling on user expe-
rience (Gatti et al., 2024). In addition to standard
metrics, novel factors such as visual aesthetics and
emotional impacts are gaining recognition in the as-
sessment of XR applications, as they provide deeper
insights into user engagement and satisfaction (Stac-
chio et al., 2023). When integrating GenAI into IA
applications, though quantitative metrics are widely
adopted in the AI community, there is a need for fur-
ther exploration into combining these with human-
centered factors for a more comprehensive evaluation.
3 DISCUSSION
The integration of GenAI holds significant potential
for addressing conventional VA tasks and enhancing
XR applications separately. While research in these
two areas has yet to spark extensive dialogue on the
effective implementation of GenAI in IA, we antici-
pate that integrating GenAI into IA could significantly
enhance immersive experiences, foster creativity and
new knowledge generation, and broaden the accessi-
bility of IA systems. To advance this integration, we
outline key areas for further exploration, including
GenAI for multisensory IA, 3D data representations
for GenAI training, GenAI for prototyping IA, and
ethical considerations. Although this list is not exhaustive,
addressing these challenges could open the door to more
GenAI-supported IA applications.
3.1 GenAI for Multisensory IA
Although immersive data visualizations remain a cen-
tral focus of IA systems, these systems ultimately
strive to engage multiple senses to enhance informa-
tion perception and interaction (McCormack et al.,
2018). Research has shown that multisensory compo-
nents such as haptic feedback, spatial audio, and ol-
factory cues can significantly enhance user immersion
and understanding in IEs. For example, haptic and
auditory feedback can improve spatial awareness and
heighten the sense of presence (Wagener et al., 2022;
Kern and Ellermeier, 2020), and olfactory cues can
evoke deeper emotional engagement (Cowan et al.,
2023; Dal Bò et al., 2024; Wang et al., 2024b). These
findings suggest that multisensory approaches could
be valuable in IA systems.
GenAI could further enhance these multisensory
experiences by generating realistic sensory outputs in
IA based on user interactions. For instance, auditory
cues could alert users when they navigate to outliers
within a data cluster in an IA system, providing a sen-
sory guide without adding visual clutter. GenAI has
also expanded the potential for novel interaction tech-
niques in XR, which can be adapted to IA. For ex-
ample, work such as SonoHaptic (Bonada et al.,
2016) has shown how audio and haptic feedback can
enhance the selection and understanding of data rep-
resentations in IEs, suggesting similar applications
could improve user comprehension during immersive
data exploration in IA.
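As a concrete illustration of such a cue, the sketch below flags statistical outliers in a 3D point cloud and emits a located ping when the user approaches one. The play_spatial_audio function is a hypothetical stand-in for an XR engine's spatial audio call, stubbed so the code runs; the cue clip itself could be produced by a generative audio model.

```python
# Illustrative sketch: emit a spatialized audio cue when the user nears
# an outlier in a 3D scatterplot. `play_spatial_audio` is a hypothetical
# stand-in for an XR engine's audio call, stubbed here so the code runs.
import numpy as np

def play_spatial_audio(clip: str, position) -> None:
    print(f"cue '{clip}' at {position}")  # stub for an engine call

def find_outliers(points: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Indices of points whose distance from the centroid is a z-score outlier."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    z = (d - d.mean()) / d.std()
    return np.where(z > z_thresh)[0]

def on_user_moved(user_pos: np.ndarray, points: np.ndarray, radius: float = 0.5):
    for i in find_outliers(points):
        if np.linalg.norm(points[i] - user_pos) < radius:
            # A located ping guides attention without adding visual clutter.
            play_spatial_audio("outlier_ping.wav", points[i])
```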
Using GenAI technologies to support multisen-
sory IA experiences also makes sense in scenarios
where visual information alone is insufficient or im-
practical. GenAI’s capabilities in generating adaptive,
context-aware sensory outputs can reduce the chances
of visual overload, enabling users to focus on impor-
tant visual elements while still gaining a comprehen-
sive understanding of complex data. While visual perception
is often the most effective channel, using GenAI
to provide non-visual data representations, such as ol-
factory and auditory cues, can enhance accessibility
for visually impaired users.
3.2 3D Data Representations for GenAI
Training
While some success has been achieved in generating
and enhancing basic visualizations using GenAI, cur-
rent applications remain limited in IA. One reason is
that most current GenAI models generate information
in textual format, realistic 2D images or videos, and
3D models. However, generating and understanding
abstract data visualizations, whether in 2D or 3D, is
more challenging, as their numerical meaning must
be preserved by the visual encoding.
Bridging this gap will require innovative ap-
proaches in creating, training, and evaluating 2D and
3D data visualization generative models. A critical
first step is the development of comprehensive data
representations to serve as training datasets. Within
the visualization community, 3D representations of
abstract data in 2D media have been historically and
justly criticized for causing issues such as occlusion
and perspective distortion (Munzner, 2014). Nevertheless,
the use of 3D representations in immersive media
should not be automatically discarded, even for abstract
data, especially when the data has a spatial embedding
(Marriott et al., 2018; Kraus et al., 2022). For example, in
situated analytics, 3D data representations can blend
more naturally in IEs and provide better immersion to
the user (Shin et al., 2024). Moreover, 3D data repre-
sentations have been adopted and studied for different
abstract data types in IEs, e.g., node-link graphs, scat-
terplots, and parallel coordinate plots (Kraus et al.,
2022).
Collecting 3D data representations for future
GenAI models may help solve complex data analy-
sis tasks in IEs. Moreover, new approaches could ad-
vance the capturing and rendering of spatial relation-
ships to transform how we analyze complex datasets
in IA systems.
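As a thought experiment, one record in such a training corpus might pair tabular data, a visual-encoding specification, and an immersive scene description, as sketched below; all field names are hypothetical and do not refer to an existing dataset schema.

```python
# Hypothetical sketch of one training record pairing abstract data with
# a 3D visualization specification; all field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Immersive3DVisSample:
    data_table: list          # raw tabular rows, e.g. parsed from a CSV
    vis_type: str             # "3d_scatterplot", "node_link_graph", ...
    encoding: dict = field(default_factory=dict)   # column -> x/y/z, color, size
    scene_spec: dict = field(default_factory=dict) # mesh path, scale, anchor pose
    description: str = ""     # caption for text-conditioned training

sample = Immersive3DVisSample(
    data_table=[{"gdp": 3.1, "co2": 5.2, "pop": 67}],
    vis_type="3d_scatterplot",
    encoding={"gdp": "x", "co2": "y", "pop": "z"},
    scene_spec={"mesh": "scatter.gltf", "scale_m": 1.5},
    description="Countries plotted by GDP, CO2 emissions, and population.",
)
```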
3.3 GenAI for Prototyping IA
While existing experience from GenAI applications
in VA can be leveraged to enhance IA, developing
an IA system remains time-consuming, and collect-
ing meaningful user feedback poses additional chal-
lenges. Despite these obstacles, the success of GenAI
in various fields—including procedural flowchart cor-
rection (Alshareef et al., 2024; Balhorn et al., 2024),
visual object generation (Nath and Ethirajan, 2023;
Numan et al., 2023; Ye et al., 2024), and human be-
havior simulation (Bharti and Sharma, 2024; Si et al.,
2023)—suggests a potential for supporting rapid
prototyping of IA applications.
An interesting workflow of GenAI-supported IA
prototyping could involve an iterative cycle including
design, development, and evaluation phases. In the
design phase, developers would specify key parame-
ters such as data types, preferred visualization tech-
niques, interaction methods, XR devices, and evalua-
tion protocols in natural language, allowing for easy
adaptation to project needs. During the development
phase, GenAI models, particularly text-to-code gen-
erators, could automatically translate specifications
into corresponding visualizations within IEs, allow-
ing developers to focus more on the high-level de-
sign of IA applications. In the evaluation phase,
GenAI could simulate participant behaviors and gen-
erate feedback based on user studies, with input from
psychologists and cognitive experts to ensure realistic
behavior modeling. These GenAI-driven evaluations
could be automatically analyzed, generating reports
for subsequent design iterations.
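To sketch the design-phase handoff described above, the structured specification below shows one hypothetical form such natural-language parameters could take before a text-to-code model translates them into an IA scene; the field names and the commented-out generator call are illustrative, not an existing API.

```python
# Hypothetical design-phase specification for GenAI-supported IA
# prototyping; field names and the generator call are illustrative.
ia_spec = {
    "data": {"source": "sensors.csv", "type": "multivariate time series"},
    "visualization": "3D parallel coordinate plot anchored around the user",
    "interaction": ["gaze selection", "voice-driven filtering"],
    "device": "OpenXR-compatible head-mounted display",
    "evaluation": {"metrics": ["task accuracy", "completion time"]},
}

prompt = (
    "Generate scene code for an immersive analytics prototype "
    f"matching this specification: {ia_spec}"
)
# generated_scene = text_to_code_model(prompt)  # hypothetical call
```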
Despite its potential, we also envision several
challenges in applying this human-GenAI collabo-
rative IA prototyping approach. First, GenAI mod-
els trained on general visualization types or domain
datasets may struggle to generate or adapt highly
domain-specific visualizations accurately. This issue
is compounded by the evolving nature of XR hard-
ware and software, as updates or new devices may
require recalibration of the GenAI-generated visual-
izations to ensure compatibility and optimal perfor-
mance. A standardized protocol for future XR de-
vices may help address these problems. Furthermore,
as immersive experiences often involve real-time data
integration, maintaining synchronization between the
GenAI model and live data feeds can be challenging.
Addressing these challenges will require GenAI mod-
els that are flexible and capable of continuous learn-
ing and adaptation to different data streams and IA
environments, which currently remains a significant
hurdle for practical deployment.
3.4 Ethical Issues of GenAI for IA
The integration of GenAI into IA may introduce new
ethical challenges while the widely discussed ethi-
cal concerns in AI and XR remain. Recently, Hu et
al. have proposed a metrology for ethics and data pri-
vacy issues in AI-enriched human-centered XR sys-
tems (Hu et al., 2024), suggesting most data privacy
issues and ethical concerns should be considered in
GenAI-supported IA systems. We highlight ethical
issues regarding data privacy and interpretability of
GenAI-supported IA systems and propose possible
solutions from the perspective of IA researchers and
developers.
Some IA applications may continuously learn and
adapt based on real-time user data for personalized
experiences, which may threaten data privacy. Large
pre-trained GenAI models applied to IA may increase
the risk as user data is often processed in cloud cen-
ters rather than locally (Gupta et al., 2023). The
guidance from the research communities in AI, XR,
and interdisciplinary topics can be adapted to address
data privacy concerns in GenAI-supported IA sys-
tems. When continuous user data collection is neces-
sary, developers should keep in mind that all user data
collected through IA systems can be linked to individual
identities and should be protected like other
private data (Carter and Egliston, 2023). A group
of tools and technologies for enhancing data privacy
in XR can be adapted in GenAI-supported IA sys-
tems, such as self-sovereign identity frameworks, de-
centralized identifiers and verifiable credentials spec-
ifications, and privacy-focused blockchain technolo-
gies (Alkaeed et al., 2024).
As the method of data visualization in IEs directly
impacts user understanding and interaction (Whitlock
et al., 2020), data visualizations automatically gen-
erated by GenAI may manipulate user focus, leading
to potential biases or misinterpretations of data.
Therefore, an ethical need is to examine how much
control GenAI should exert over users' attention in
IEs. On the other hand, even though GenAI can potentially
guide data exploration in XR, users may
need to fully understand how GenAI arrived at certain
conclusions. A possible solution is to add "explanation
layers" within the design space that provide
insight into the AI's decision-making process
and ensure users can evaluate AI-generated outputs.
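One hypothetical shape for such an explanation layer is a provenance record attached to each generated visualization, as sketched below; the field names are illustrative.

```python
# Hypothetical "explanation layer" record attached to a GenAI-generated
# visualization; field names are illustrative.
from dataclasses import dataclass

@dataclass
class ExplanationLayer:
    model_id: str        # which GenAI model produced the visualization
    prompt: str          # the instruction that triggered generation
    data_sources: list   # datasets the output is grounded in
    rationale: str       # model-produced summary of why this view was chosen
    caveats: str         # limitations to surface next to the visualization

layer = ExplanationLayer(
    model_id="example-llm-v1",
    prompt="Highlight clusters with rising trends.",
    data_sources=["sales_2024.csv"],
    rationale="Clusters 2 and 5 show >10% month-over-month growth.",
    caveats="Generated summary; verify against the raw data.",
)
```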
3.5 Summary
Despite the overall potential of integrating GenAI into
IA applications, four future directions with key issues
are highlighted. This work first highlights that using
GenAI to enhance multisensory IA experiences
would benefit immersive data exploration and insight
generation. Besides, the less-explored 3D data representations
for GenAI training are identified as a critical
research area. Meanwhile, considering the design
spaces of IA, integrating GenAI agents into IA design
and prototyping processes may improve the efficiency
of IA system development. For actual deployment,
addressing ethical concerns, particularly
regarding data privacy, bias, and transparency,
is crucial for responsible use. By addressing
these key issues, the synergy can bring both technical
advancements and novel user-centric designs to
the GenAI and IA research communities.
4 CONCLUSIONS
This position paper adds XR as another dimension
of the integration of GenAI into visualization and
presents an overview of how they have been com-
bined in various applications. Our work illustrates
how GenAI intersects with VA and XR, highlighting
the transformative potential of these integrations in
the IA domain. We envision that GenAI will reshape
IA applications by enabling multisensory experiences
and automating complex 3D data visualizations, fos-
tering more immersive and personalized data analyt-
ics. However, achieving this vision requires advance-
ments in user-centric design, spatially aware model
training, and ethical practices. Specifically, considerations
around data privacy and interpretability of
GenAI models in IA systems are essential. Addition-
ally, we discuss some tools and methods to help ad-
dress these challenges, positioning GenAI as a driving
force for innovative, powerful, and ethically grounded
future IA applications.
ACKNOWLEDGEMENTS
This research was funded partly by the Knowledge
Foundation, Sweden, through the Human-Centered
Intelligent Realities (HINTS) Profile Project (contract
20220068).
REFERENCES
Alkaeed, M., Qayyum, A., and Qadir, J. (2024). Privacy
preservation in artificial intelligence and extended re-
ality (ai-xr) metaverses: A survey. Journal of Network
and Computer Applications, page 103989.
Alshareef, A., Keller, N., Carbo, P., and Zeigler, B. P.
(2024). Generative ai with modeling and simulation
of activity and flow-based diagrams. In Guisado-
Lizar, J.-L., Riscos-Núñez, A., Morón-Fernández, M.-J.,
and Wainer, G., editors, Simulation Tools and
Techniques, pages 95–109, Cham. Springer Nature
Switzerland.
Ayre, D., Dougherty, C., and Zhao, Y. (2023). Implementa-
tion of an artificial intelligence (ai) instructional sup-
port system in a virtual reality (vr) thermal-fluids lab-
oratory. In IMECE2023, Volume 8: Engineering Edu-
cation.
Balhorn, L. S., Caballero, M., and Schweidtmann, A. M.
(2024). Toward autocorrection of chemical process
flowsheets using large language models. In Manenti,
F. and Reklaitis, G. V., editors, 34th European Sympo-
sium on Computer Aided Process Engineering / 15th
International Symposium on Process Systems Engi-
neering, volume 53 of Computer Aided Chemical En-
gineering, pages 3109–3114. Elsevier.
Basole, R. C. and Major, T. (2024). Generative AI for Visu-
alization: Opportunities and Challenges. IEEE Com-
puter Graphics and Applications, 44(02):55–64.
Bharti, P. and Sharma, V. K. (2024). Enhancing decision-
making: The significance of humanized avatars in
complex scenarios (short paper). In Soto, A. and
Zangerle, E., editors, Joint Proceedings of the ACM
IUI 2024 Workshops co-located with the 29th Annual
ACM Conference on Intelligent User Interfaces (IUI
2024), Greenville, South Carolina, USA, March 18,
2024, volume 3660 of CEUR Workshop Proceedings.
CEUR-WS.org.
Bonada, S., Veras, R., and Collins, C. (2016). Personal-
ized views for immersive analytics. In Proceedings
of the 2016 ACM Companion on Interactive Surfaces
and Spaces, ISS ’16 Companion, page 83–89, New
York, NY, USA. Association for Computing Machin-
ery.
Carter, M. and Egliston, B. (2023). What are the risks of vir-
tual reality data? learning analytics, algorithmic bias
and a fantasy of perfect data. New Media & Society,
25(3):485–504.
Chan, A. and Liu, J. A. (2024). Board 24: Development
of multi-user-enabled, interactive, and responsive vir-
tual/augmented reality-based laboratory training sys-
tem. In 2024 ASEE Annual Conference & Exposition.
Chen, X., Gao, W., Chu, Y., and Song, Y. (2024). Enhancing
interaction in virtual-real architectural environments:
A comparative analysis of generative ai-driven reality
approaches. Building and Environment, 266:112113.
Cheng, A. Y., Guo, M., Ran, M., Ranasaria, A., Sharma,
A., Xie, A., Le, K. N., Vinaithirthan, B., Luan, S. T.,
Wright, D. T. H., Cuadra, A., Pea, R., and Landay,
J. A. (2024). Scientific and fantastical: Creating im-
mersive, culturally relevant learning experiences with
augmented reality and large language models. In Pro-
ceedings of the 2024 CHI Conference on Human Fac-
tors in Computing Systems, CHI ’24, New York, NY,
USA. Association for Computing Machinery.
Constantinides, N., Constantinides, A., Koukopoulos, D.,
Fidas, C., and Belk, M. (2024). Culturai: Explor-
ing mixed reality art exhibitions with large language
models for personalized immersive experiences. In
Adjunct Proceedings of the 32nd ACM Conference
on User Modeling, Adaptation and Personalization,
UMAP Adjunct ’24, page 102–105, New York, NY,
USA. Association for Computing Machinery.
Cowan, K., Ketron, S., Kostyk, A., and Kristofferson, K.
(2023). Can you smell the (virtual) roses? the in-
fluence of olfactory cues in virtual reality on immer-
sion and positive brand responses. Journal of retail-
ing, 99(3):385–399.
Dal Bò, E., Cecchetto, C., Callara, A. L., Greco, A., Mura,
F., Vanello, N., Di Francesco, F., Scilingo, E. P., and
Gentili, C. (2024). Emotion perception through the
nose: how olfactory emotional cues modulate the per-
ception of neutral facial expressions in affective disor-
ders. Translational Psychiatry, 14(1):342.
Drogemuller, A., Sakhaei, H., Cunningham, A., Yu, R.,
Gu, N., and Thomas, B. H. (2023). Envisioning
paramersive design: An immersive approach to archi-
tectural design and review. In 2023 IEEE Interna-
tional Symposium on Mixed and Augmented Reality
Adjunct (ISMAR-Adjunct), pages 261–265.
Dykes, B. (2015). Data storytelling: What it is and how
it can be used to effectively communicate analysis
results. Applied Marketing Analytics: The Peer-
Reviewed Journal, 1.
Fortune Business Insights (2024). Extended Reality (XR)
Market Size, Share & Industry Analysis, By Type
(Virtual Reality, Augmented Reality, and Mixed Re-
ality), By Industry (Healthcare, Education, Retail &
E-commerce, Gaming, Automotive, Media & En-
tertainment, and Others), and Regional Forecast,
2024-2032. https://www.fortunebusinessinsights.
com/extended-reality-market-106637. [Online; ac-
cessed 12-November-2024].
Gandee, T. J., Glaze, S. C., and Giabbanelli, P. J. (2024).
A visual analytics environment for navigating large
conceptual models by leveraging generative artificial
intelligence. Mathematics, 12(13).
Gao, J., Shen, T., Wang, Z., Chen, W., Yin, K., Li, D.,
Litany, O., Gojcic, Z., and Fidler, S. (2022). Get3d:
A generative model of high quality 3d textured shapes
learned from images. In Advances In Neural Informa-
tion Processing Systems.
Gartner (2024). Gartner says more than 80% of en-
terprises will have used generative ai apis or
deployed generative ai-enabled applications by
2026. https://www.gartner.com/en/newsroom/press-
releases/2023-10-11-gartner-says-more-than-80-
percent-of-enterprises-will-have-used-generative-
ai-apis-or-deployed-generative-ai-enabled-
applications-by-2026. [Online; accessed 13-
November-2024].
Gatti, E., Giunchi, D., Numan, N., and Steed, A. (2024).
Aisop: Exploring immersive vr storytelling leveraging
generative ai. In 2024 IEEE Conference on Virtual Re-
ality and 3D User Interfaces Abstracts and Workshops
(VRW), pages 865–866. IEEE.
Gupta, M., Akiri, C., Aryal, K., Parker, E., and Praharaj, L.
(2023). From chatgpt to threatgpt: Impact of genera-
tive ai in cybersecurity and privacy. IEEE Access.
Hara, T. (2024). Enhancing vr customer service training: A
system for generating customer queries and evaluating
trainee responses. In ACM SIGGRAPH 2024 Posters,
SIGGRAPH ’24, New York, NY, USA. Association
for Computing Machinery.
Hara, T., Maeda, H., Komatsubara, S., Tanikawa, T., and
Hirose, M. (2024). Autonomous avatar for customer
service training vr system. In 2024 IEEE Conference
on Virtual Reality and 3D User Interfaces Abstracts
and Workshops (VRW), pages 835–836.
Hu, Y., Tutschku, K., Boeva, V., Goswami, P., Abghari, S.,
and Sundstedt, V. (2024). Towards an ethical and data
privacy metrology for ai-enriched human-centered xr
systems. In IEEE International Conference on Metrol-
ogy for eXtended Reality, Artificial Intelligence and
Neural Engineering IEEE MetroXRAINE 2024, 21-
23 October, St Albans, London, UK. IEEE.
Jamaludin, N. A., Mohamed, F., Chan, V. S., Sunar, M. S.,
Selamat, A., Krejcar, O., and Iglesias, A. (2023). An-
swering why and when?: A systematic literature re-
view of application scenarios and evaluation for im-
mersive data visualization analytics. Journal of Cases
on Information Technology (JCIT), 25(1):1–29.
Kapadia, N., Gokhale, S., Nepomuceno, A., Cheng, W.,
Bothwell, S., Mathews, M., Shallat, J. S., Schultz, C.,
and Gupta, A. (2024). Evaluation of large language
model generated dialogues for an ai based vr nurse
training simulator. In Chen, J. Y. C. and Fragomeni,
G., editors, Virtual, Augmented and Mixed Reality,
pages 200–212, Cham. Springer Nature Switzerland.
Kern, A. C. and Ellermeier, W. (2020). Audio in vr: Effects
of a soundscape and movement-triggered step sounds
on presence. Frontiers in Robotics and AI, 7:20.
Klein, K., Sedlmair, M., and Schreiber, F. (2022). Immer-
sive analytics: An overview. it - Information Technol-
ogy, 64(4-5):155–168.
Kraus, M., Fuchs, J., Sommer, B., Klein, K., Engelke, U.,
Keim, D., and Schreiber, F. (2022). Immersive ana-
lytics with abstract 3d visualizations: A survey. Com-
puter Graphics Forum, 41(1):201–229.
Kraus, M., Klein, K., Fuchs, J., Keim, D. A., Schreiber,
F., and Sedlmair, M. (2021). The value of immersive
visualization. IEEE Computer Graphics and Applica-
tions, 41(4):125–132.
Le, M.-H., Chu, C.-B., Le, K.-D., Nguyen, T. V., Tran,
M.-T., and Le, T.-N. (2023). Vides: Virtual interior
design via natural language and visual guidance. In
2023 IEEE International Symposium on Mixed and
Augmented Reality Adjunct (ISMAR-Adjunct), pages
689–694.
Lee, J., Wang, J., Brown, E., Chu, L., Rodriguez, S. S., and
Froehlich, J. E. (2023). Towards designing a context-
aware multimodal voice assistant for pronoun disam-
biguation: A demonstration of gazepointar. In Adjunct
Proceedings of the 36th Annual ACM Symposium on
User Interface Software and Technology, UIST ’23
Adjunct, New York, NY, USA. Association for Com-
puting Machinery.
Lee, J., Wang, J., Brown, E., Chu, L. G. P., Rodriguez, S. S.,
and Froehlich, J. E. (2024). Gazepointar: A context-
aware multimodal voice assistant for pronoun disam-
biguation in wearable augmented reality. In Proceed-
ings of the 2024 CHI Conference on Human Factors
in Computing Systems.
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mo-
hamed, A., Levy, O., Stoyanov, V., and Zettlemoyer,
L. (2020). BART: Denoising sequence-to-sequence
pre-training for natural language generation, transla-
tion, and comprehension. In Jurafsky, D., Chai, J.,
Schluter, N., and Tetreault, J., editors, Proceedings of
the 58th Annual Meeting of the Association for Com-
putational Linguistics, pages 7871–7880, Online. As-
sociation for Computational Linguistics.
Li, Z., Babar, P. P., Barry, M., and Peiris, R. L. (2024). Ex-
ploring the use of large language model-driven chat-
bots in virtual reality to train autistic individuals in
job communication skills. In Extended Abstracts of
the CHI Conference on Human Factors in Computing
Systems, CHI EA ’24, New York, NY, USA. Associa-
tion for Computing Machinery.
Lo Duca, A. (2023). Towards a framework for ai-assisted
data storytelling. In Proceedings of the 19th Inter-
national Conference on Web Information Systems and
Technologies - WEBIST, pages 512–519. INSTICC,
SciTePress.
Lo Duca, A. (2024). Using retrieval augmented generation
to build the context for data-driven stories. In Pro-
ceedings of the 19th International Joint Conference
on Computer Vision, Imaging and Computer Graph-
ics Theory and Applications - IVAPP, pages 690–696.
INSTICC, SciTePress.
Marriott, K., Chen, J., Hlawatsch, M., Itoh, T., Nacenta,
M. A., Reina, G., and Stuerzlinger, W. (2018). Immer-
sive analytics: Time to reconsider the value of 3d for
information visualisation. Immersive analytics, pages
25–55.
McCormack, J., Roberts, J. C., Bach, B., Freitas, C. D. S.,
Itoh, T., Hurter, C., and Marriott, K. (2018). Multisen-
sory immersive analytics. Immersive analytics, pages
57–94.
Munzner, T. (2014). Visualization Analysis and Design. A
K Peters Visualization Series, CRC Press, 1st edition.
Nath, M. and Ethirajan, L. (2023). Infographics genera-
tor: A smart application for visual summarization. In
2023 16th International Conference on Developments
in eSystems Engineering (DeSE), pages 630–635.
Numan, N., Giunchi, D., Congdon, B., and Steed, A.
(2023). Ubiq-genie: Leveraging external frameworks
for enhanced social vr experiences. In 2023 IEEE
Conference on Virtual Reality and 3D User Interfaces
Abstracts and Workshops (VRW), pages 497–501.
Pinargote, A., Calderón, E., Cevallos, K., Carrillo, G.,
Chiluiza, K., and Echeverría, V. (2024). Automating
data narratives in learning analytics dashboards using
genai. In LAK Workshops, pages 150–161.
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and
Ommer, B. (2022). High-resolution image synthesis
with latent diffusion models. In Proceedings of the
IEEE/CVF conference on computer vision and pattern
recognition, pages 10684–10695.
Salehi, P., Hassan, S. Z., Baugerud, G. A., Powell, M.,
Cano, M. C. L., Johnson, M. S., Røed, R. K.,
Johansen, D., Sabet, S. S., Riegler, M. A., and
Halvorsen, P. (2024). Immersive virtual reality in
child interview skills training: A comparison of 2d
and 3d environments. In Proceedings of the 16th In-
ternational Workshop on Immersive Mixed and Virtual
Environment Systems, MMVE ’24, page 1–7, New
York, NY, USA. Association for Computing Machin-
ery.
Shao, H., Martinez-Maldonado, R., Echeverria, V., Yan, L.,
and Gasevic, D. (2024). Data storytelling in data visu-
alisation: Does it enhance the efficiency and effective-
ness of information retrieval and insights comprehen-
sion? In Proceedings of the 2024 CHI Conference on
Human Factors in Computing Systems, CHI ’24, New
York, NY, USA. Association for Computing Machin-
ery.
Shi, C., Yang, C., Liu, Y., Shui, B., Wang, J., Jing, M.,
Xu, L., Zhu, X., Li, S., Zhang, Y., Liu, G., Nie, X.,
Cai, D., and Yang, Y. (2024). Chartmimic: Evaluating
lmm’s cross-modal reasoning capability via chart-to-
code generation. arXiv preprint arXiv:2406.09961.
Shin, S., Batch, A., Butcher, P. W. S., Ritsos, P. D., and
Elmqvist, N. (2024). The reality of the situation: A
survey of situated analytics. IEEE Transactions on
Visualization and Computer Graphics, 30(8):5147–
5164.
Shoa, A., Oliva, R., Slater, M., and Friedman, D. (2023).
Sushi with einstein: Enhancing hybrid live events with
llm-based virtual humans. In Proceedings of the 23rd
ACM International Conference on Intelligent Virtual
Agents, IVA ’23, New York, NY, USA. Association
for Computing Machinery.
Shrestha, A., Mielke, K., Nguyen, T. A., and Giabbanelli,
P. J. (2022). Automatically explaining a model: Us-
ing deep neural networks to generate text from causal
maps. In 2022 Winter Simulation Conference (WSC),
page 2629–2640.
Si, J., Yang, S., Kim, D., and Kim, S. (2023). Metaverse in-
terview room creation with virtual interviewer genera-
tion using diffusion model. In 2023 IEEE Asia-Pacific
Conference on Computer Science and Data Engineer-
ing (CSDE), pages 1–4.
Stacchio, L., Scorolli, C., Marfia, G., et al. (2023). Evaluat-
ing human aesthetic and emotional aspects of 3d gen-
erated content through extended reality. In CREAI@
AI* IA, pages 38–49.
Strömel, K. R., Henry, S., Johansson, T., Niess, J., and
Woźniak, P. W. (2024). Narrating fitness: Leverag-
ing large language models for reflective fitness tracker
data interpretation. In Proceedings of the 2024 CHI
Conference on Human Factors in Computing Systems,
CHI ’24, New York, NY, USA. Association for Com-
puting Machinery.
Tang, J., Wang, T., Zhang, B., Zhang, T., Yi, R., Ma, L., and
Chen, D. (2023). Make-it-3d: High-fidelity 3d cre-
ation from a single image with diffusion prior. In Pro-
ceedings of the IEEE/CVF International Conference
on Computer Vision (ICCV), pages 22819–22829.
Vasic, I., Fill, H.-G., Quattrini, R., and Pierdicca, R. (2024).
Llm-aided museum guide: Personalized tours based
on user preferences. In Extended Reality: Inter-
national Conference, XR Salento 2024, Lecce, Italy,
September 4–7, 2024, Proceedings, Part III, page
249–262, Berlin, Heidelberg. Springer-Verlag.
Wagener, N., Ackermann, A., Savino, G.-L., Dänekas, B.,
Niess, J., and Schöning, J. (2022). Influence of passive
haptic and auditory feedback on presence and mind-
fulness in virtual reality environments. In Proceedings
of the 2022 International Conference on Multimodal
Interaction, ICMI ’22, page 558–569, New York, NY,
USA. Association for Computing Machinery.
Wang, W.-S., Lin, C.-J., Lee, H.-Y., Huang, Y.-M., and Wu,
T.-T. (2024a). Integrating feedback mechanisms and
chatgpt for vr-based experiential learning: impacts on
reflective thinking and aiot physical hands-on tasks.
Interactive Learning Environments, pages 1–18.
Wang, Y., Li, Y., Yuan, M., Fei, X., Ma, S., and Hansen,
P. (2024b). Scentclue: Enhancing story engagement
in virtual reality through hedonically varied olfactory
hints. International Journal of Human–Computer In-
teraction, pages 1–20.
Wang, Y. and Zhang, Y. (2024). Enhancing cognitive recall
in dementia patients: Integrating generative ai with
virtual reality for behavioral and memory rehabilita-
tion. In Proceedings of the 2024 6th International
Conference on Big-Data Service and Intelligent Com-
putation, BDSIC ’24, page 86–91, New York, NY,
USA. Association for Computing Machinery.
Whitlock, M., Smart, S., and Szafir, D. A. (2020). Graphi-
cal perception for immersive analytics. In 2020 IEEE
Conference on Virtual Reality and 3D User Interfaces
(VR), pages 616–625.
Wong, P. C. and Thomas, J. (2004). Visual analytics. IEEE
Computer Graphics and Applications, 24(5):20–21.
Xiao, S., Huang, S., Lin, Y., Ye, Y., and Zeng, W. (2024).
Let the Chart Spark: Embedding Semantic Con-
text into Chart with Text-to-Image Generative Model.
IEEE Transactions on Visualization & Computer
Graphics, 30(01):284–294.
Yamauchi, Y., Ino, K., and Zempo, K. (2023). Auditory vr
generative system for non-experts to reproduce human
memories through natural language interactions. In
SIGGRAPH Asia 2023 Posters, SA ’23, New York,
NY, USA. Association for Computing Machinery.
Yan, L., Zhao, L., Echeverria, V., Jin, Y., Alfredo, R.,
Li, X., Gašević, D., and Martinez-Maldonado, R.
(2024). Vizchat: enhancing learning analytics dash-
boards with contextualised explanations using multi-
modal generative ai chatbots. In International Con-
ference on Artificial Intelligence in Education, pages
180–193. Springer.
Yan, Y. and Zhang, W. (2024). Cogxr: An ai/xr pow-
ered graphic tool enhancing human rationality. In
Stephanidis, C., Antona, M., Ntoa, S., and Salvendy,
G., editors, HCI International 2024 Posters, pages
245–252, Cham. Springer Nature Switzerland.
Ye, Y., Hao, J., Hou, Y., Wang, Z., Xiao, S., Luo, Y., and
Zeng, W. (2024). Generative ai for visualization: State
of the art and future directions. Visual Informatics,
8(2):43–66.
Yeo, J. Q., Wang, Y., Tanary, S., Cheng, J., Lau, M., Ng,
A. B., and Guan, F. (2023). Aicrid: Ai-empowered
cr for interior design. In 2023 IEEE International
Symposium on Mixed and Augmented Reality Adjunct
(ISMAR-Adjunct), pages 837–841.
Zhang, L., Rao, A., and Agrawala, M. (2023). Adding con-
ditional control to text-to-image diffusion models. In
Proceedings of the IEEE/CVF International Confer-
ence on Computer Vision, pages 3836–3847.