Exploring Implementation Parameters of Gen AI in Companies
Maarten Voorneveld
Leiden Institute of Advanced Computer Science, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
Keywords: Gen AI, Adoption, Case Study, LLM, Implementation, Companies.
Abstract: Our work investigates Gen AI implementation: because the field is developing at such a rapid pace, up-to-date research on business implementations and outcomes is limited. We systematically evaluate AI applications, analysing their challenges and opportunities. We consider adoption beyond pilot projects via a structured approach covering technological, organizational, and environmental factors. Our case studies show the relevance of data quality, infrastructure, and organizational culture. The paper explores how company leaders can help create employee trust and deliver on an AI strategy. Companies face competition, customer needs, and regulation that shape their technology roadmaps. These complexities are exacerbated by training data problems, internal communications, context challenges, and ethics. This research finds that the challenges of and strategies for responsible Generative AI deployment advocate a holistic and adaptive approach, which companies need to tailor to each application to achieve the desired outcome.
1 INTRODUCTION
Generative artificial intelligence (Gen AI) is a
specific application of artificial intelligence. It uses
various technologies, such as large language models,
reinforcement learning algorithms, and generative
models. It has been widely applied in companies for
customer support, content creation, and data analysis
(Bandi et al., 2023; Bostrom & Yudkowsky, 2014).
The literature has investigated the specific ways in
which Generative AI is being implemented in
businesses and its impact on business outcomes
(Agrawal, 2023; Alvim & Grushin, 2019).
Gen AI implementation involves the design,
development, and deployment of systems to achieve
specific goals and objectives (Ghimire et al., 2023;
Kelleher et al., 2015). This holistic approach is
needed to positively affect the effectiveness of
generative AI implementation, which is determined
by factors such as user satisfaction, system reliability,
and overall performance (Abbeel & Zaremba, 2019).
We systematically research Gen AI implementation
from an organizational perspective, creating
much-needed insight in this rapidly evolving field.
This work contributes to research on the
application of AI. The findings provide insights into
how generative AI is used, showing the benefits and
challenges of implementation and its impact on
business outcomes. The study will inform the
development of best practices for implementation
and help companies make informed decisions about
the adoption of AI. The research will aid in
addressing the application challenges of gen AI
technology in companies, help identify benefits, and
support the impact assessment of gen AI.
In the past, only a few companies adopted and
deployed AI applications beyond pilot projects (Anon,
2020). This has changed with the launch of OpenAI,
as gen AI is now on every company's radar.
Organizations face challenges in adopting and
deploying AI stemming from technological,
organizational, or environmental readiness gaps,
caused by government regulations, infrastructure
costs, resources, or reliance on external partners
(Alsheibani et al., 2018). In addition, there can be
organizational obstacles, with stakeholders
prioritizing automation to reduce costs while
managers may prefer augmentation, leading to a
potential paralysis in deployment (Dedrick et al.,
2013; Shollo et al., 2020). The use of AI may
challenge cultural norms and act as a barrier for
managers and customers to accept AI technologies
(Dwivedi et al., 2019). To understand the dynamics
involved in organizations adopting AI and developing
AI capabilities, investigation into the socio-technical
arrangements and processes through which AI
applications are developed and deployed will help
(Boyd & Holton, 2019). Therefore, a deeper
understanding of these challenges and cultural
obstacles, as well as strategies to overcome them, is
crucial, leading us to the research question of this
paper:
Which parameters are used to implement gen AI, and
how do companies overcome its challenges?
This research focuses on how cultural norms and
changes in organizational structures impact the
adoption and deployment of AI in business
operations. This study can benefit academic
researchers, practitioners, and policymakers working
in the field of artificial intelligence and its
applications in business by addressing how
companies can use generative AI, what the potential
benefits of generative AI are, and the related
challenges companies face in using it. This research
links a theory of key metrics for measuring the
impact of AI in business with our findings,
comparing the two to provide suggestions. The paper
continues with a literature review, followed by the
methodology, before presenting the outcomes and
conclusions.
2 BACKGROUND
As gen AI continues to advance, it becomes crucial to
have a comprehensive understanding for assessing its
performance and evaluating its outputs. An early
discussion with a VP at an organization implementing
AI solutions for corporate clients drew attention to the
industry's game-changing technology investments. It
highlighted the ongoing rapid industrial revolution
propelled by Generative AI, citing Microsoft's
substantial investment as a testament to its
transformative potential. While emphasizing the
urgency for businesses to integrate Generative AI to
avoid obsolescence, it acknowledged that Microsoft's
decision to make Azure the exclusive cloud provider
potentially limits accessibility. The interview
underscored the need for flexibility in adoption and
deployment strategies. Generative AI's profound
impact extends to daily software interactions,
prompting the next challenge of effectively
incorporating it into enterprise environments. We
refer to this discussion as Case Study 0; the
interviewee recognized the achievements in this
domain, inviting discussions on the future of AI, deep
learning, and generative AI. This company envisions
the possibility of running smaller GPT models,
highlighting potential benefits in reduced data centre
footprint, power consumption, and maintenance
compared to larger Generative AI models, and
emphasizes the need to tailor a solution to enterprise
settings according to specific requirements, such as
sustainability or financial advantages, where required.
To provide a systematic approach for assessing
generative AI, relevant concepts, methods, and
metrics from the existing literature are required.
There are different types of generative AI models
with different underlying principles, which should
be assessed by a variety of evaluation metrics
(Goodfellow et al., 2014; Kingma & Welling, 2013;
Radford et al., 2019). Metrics that can be used to
assess the performance of generative AI models act
as enablers or inhibitors of AI use; these can be
subdivided into three main categories: technological,
organizational, and environmental (Enholm et al., 2021).
2.1 Organizational
Strategic orientation and organizational structure
impact the ability to successfully adopt AI, making
organizational culture a key factor in the AI adoption
process (Mikalef & Gupta, 2021). A culture of
innovation can encourage learning and development,
which is essential for implementing new solutions
such as AI (Mikalef & Gupta, 2021). Such changes
require the support of leaders, preferably company
executives, to drive adoption (Alsheibani et al., 2018;
Demlehner & Laumer, 2020). Leaders should actively
participate in exploring the best applications of AI to
establish a culture supporting disruptive adoption
(Lee et al., 2019). Through this support, initiators of
change can have resources allocated to support the
adoption of AI.
Making sure organizations are ready for change,
with the necessary resources made available, is
essential for AI adoption (Alsheibani et al., 2018).
As mentioned, this in part means adequate budget
allocation, to an extent without stringent performance
targets, because the early days of adopting a new
solution require additional freedom to allow
employees to learn while developing the best AI use
cases (Pumplun et al., 2019). Core to the
organizational capabilities are employees with deep
and broad technical skills to create and deploy AI,
who need to be able to collaborate with subject
matter experts on existing business processes. This is
essential to identify opportunities for AI use cases and
advocate their benefits (Pumplun et al., 2019).
Internal availability of expertise is a challenge, as it is
often allocated to running projects. Clear business
goals are required to ensure that technical and
managerial staff are trained and have the availability
to develop AI-based solutions for specific business
functions (Mikalef & Gupta, 2021).
Employees having trust in systems is crucial for
successful implementation, and this is especially so
for AI. As the impact in companies can be large, with
AI replicating partial human cognition or automating
laborious tasks, there is a risk of changing employees'
roles and responsibilities, impacting their livelihood
(Makarius et al., 2020). Therefore, employees need to
understand the purpose of AI and its role. They need
to understand how it will affect their responsibilities
and be able to see the benefit (Makarius et al., 2020).
Building this trust between humans and machines is a
challenging task, as the implementation of solutions
rarely considers emotions and empathy, an aspect
also absent in AI. Additionally, managers need to be
able to rely on AI systems; to do so, they need a solid
understanding of the technology (Keding, 2020).
Concluding from all this, companies need to develop
an AI organizational adoption strategy to proactively
overcome the barriers and reap the benefits of AI
adoption, aligning it with existing goals (Finch et al.,
2017; Keding, 2020). Such a strategy can be effective
when it includes specific processes, plans, and
timeframes for implementation, requiring
organizational structural change processes,
collaboration options between departments, and data
governance improvement plans (Mikalef & Gupta,
2021). In terms of organizational readiness, it is
important to define the benefits of the AI solution for
the organizational goals and strategy (Pumplun et al.,
2019). Higher levels of adoption and use of AI are
observed when there is a strong fit between the
technology and the business goals. This should be
achieved through a use case definition addressing
how problems will be solved through AI and how
business performance will be enhanced (Mishra &
Pani, 2020; Alsheibani et al., 2018). Conversely,
companies must be able to adapt their business
processes to the requirements of AI for successful
implementation.
2.2 Technological
Large data sets are used to train models, putting data
at the core of AI development (Schmidt et al., 2020).
The quality of the data being used in training models
is crucial; the "garbage-in, garbage-out" principle is
often cited as fundamental for AI (Lee et al., 2019).
This can be overcome by dealing with common
challenges in data quality, which include completing
datasets, labelling data, filtering incorrect entries, and
removing noise or other disruptions in the data. Data
scientists need to closely collaborate with engineering
teams to identify and mitigate data quality problems
(Baier et al., 2019). Data can also suffer from bias
introduced at various stages of its use cycle, for
instance by priming during generation, through
selective collection, or through faulty processing; it is
essential that this is addressed to reduce negative
consequences (Ntoutsi et al., 2020).
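To make these cleaning steps concrete, the following minimal sketch (in Python, with hypothetical column names, label set, and thresholds of our own choosing, not drawn from the case studies) illustrates completing a dataset, filtering incorrect entries, and removing noise and duplicates:

# A minimal, hypothetical sketch of the data-quality steps discussed above;
# the column names, label set, and length threshold are illustrative
# assumptions, not taken from the interviewed companies.
import pandas as pd

def clean_training_data(df: pd.DataFrame) -> pd.DataFrame:
    # Complete the dataset: drop rows missing the fields a model must see.
    df = df.dropna(subset=["text", "label"])
    # Filter incorrect entries: keep only labels from the known label set.
    df = df[df["label"].isin({"positive", "negative", "neutral"})]
    # Remove noise: discard near-empty records and exact duplicates.
    df = df[df["text"].str.len() > 10]
    df = df.drop_duplicates(subset=["text"])
    return df.reset_index(drop=True)

example = pd.DataFrame({
    "text": ["Great product, works well", None, "ok", "Great product, works well"],
    "label": ["positive", "positive", "invalid", "positive"],
})
print(clean_training_data(example))  # one clean row survives

Each step maps onto one of the challenges named above; in practice such rules would be developed jointly by data scientists and engineering teams (Baier et al., 2019).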
Utilizing a suitable infrastructure is a requirement
in the process of AI adoption: having sufficient
computing power of the correct instance type and
developing workable algorithms that can train on the
quality data sets (Wamba-Taguimdje et al., 2020).
The algorithms are often complex and the data sets
enormous, thus requiring massive amounts of
computing power (Baier et al., 2019). This has a
significant impact on companies, and most
organizations may not have such resources available
(Schmidt et al., 2020). To address this, many
companies are utilizing cloud-based solutions for
machine learning infrastructure (Borges et al., 2020).
This option has democratized the development of AI,
giving organizations access to the necessary
resources for AI adoption (Schmidt et al., 2020;
Wang et al., 2019). In conclusion, quality data free
from bias requires collaboration with data scientists,
and the right technology infrastructure is an essential
enabler of AI adoption in organizations. This includes
suitable computing power and algorithms, critical for
developing quality AI applications, often provided
via cloud-based solutions.
2.3 Environmental
A strong driving factor for AI adoption is that
companies seek to gain a competitive advantage over
their competition by developing and adopting
innovations (Demlehner & Laumer, 2020). Their
customers can play a crucial role when demanding
specific goods or services. To meet these needs,
companies must consider how to leverage their
knowledge in the process of AI adoption (Coombs et
al., 2020).
Government policies and regulations also play a
crucial role in shaping the ethical and moral aspects
of AI adoption. The General Data Protection
Regulation (GDPR), enforced in the European Union
(EU) and the European Economic Area (EEA) in May
2018, regulates the processing of personal data and
has implications for organizations using AI solutions
as they struggle to comply with data protection
requirements (Pumplun et al., 2019). GDPR increases
the complexity of AI deployment as organizations
need to anonymize data sets to comply with the law,
which can hinder the use of intelligent, self-learning
algorithms. Intellectual property issues related to AI
algorithms and data sets can also pose legal
challenges to AI adoption (Demlehner & Laumer,
2020). Additionally, industry-specific regulations and
external circumstances can impact AI adoption, with
highly regulated sectors like healthcare facing
additional challenges (Coombs et al., 2020).
Addressing ethics is crucial when adopting AI
systems that possess capabilities displacing human
output, as AI has the effect of interconnecting humans
and machines to a level not previously achieved. In
doing so, it is essential that applications are developed
based on ethical principles and do not contain
unknown biases (Coombs et al., 2020). Typical issues
with the development of AI are a lack of transparency,
unconscious bias, and potentially discrimination.
Being data-driven, AI can produce biased outcomes if
the underlying data is unbalanced or inherently
discriminatory, but it can also be influenced by the
biases of system developers (Baier et al., 2019).
Public and private bodies can support generating
transparency, accountability, safety and security,
societal and environmental well-being, design for
universal access, and human agency and oversight
(European Commission, 2019a; European
Commission, 2019b). In conclusion, gen AI requires a
comprehensive evaluation that considers key
concepts, methods, and metrics. These factors will be
further detailed in the Methodology section, to
understand the performance and reliability of
generative AI in complex decision-making processes.
3 METHODOLOGY
This study employs a qualitative case study approach,
in which we have set up data collection through
in-depth interviews with key AI implementation
leaders in various companies at the forefront of
applying gen AI. The companies are AWS, Microsoft,
and OpenAI, and the interviews are aimed at
determining how they enable the adoption of AI. The
data collected is analysed using thematic analysis to
identify key themes and patterns. The first step in
assessing gen AI implementation is conceptualizing
what it is, to provide an overview of the different
aspects that need to be considered in the assessment
of gen AI. This includes evaluation metrics for
diversity and novelty and looks at the application's
realism and fidelity. It also enquires into robustness
and generalization capability, whilst not shying away
from ethical considerations on interpretability and
explainability. Lastly, issues like user acceptance,
usability, and contextual factors are considered.
These topics are addressed through questions and
investigation.
Conducting case studies on companies adopting
and deploying generative AI requires a well-defined
methodology to gather valuable insights. The method
is based on a qualitative analysis of case study
interviews. After a review of the existing literature on
generative AI adoption and deployment, we identified
the key parameters, challenges, and best practices
discussed in the literature. These have been tailored to
the companies selected for their prominence and
significant contributions in the generative AI space:
AWS, Microsoft, a Tech Unicorn, and OpenAI. In
conducting and reporting this research, we have
upheld ethical standards by obtaining informed
consent from interviewees and anonymizing the data
to protect the identity of participants.
We have conducted semi-structured interviews
with senior leaders or key stakeholders at each
company, based on an interview guide focusing on
the parameters considered and the implementation
challenges. The output has been transcribed, and
qualitative analysis tools were applied to generate
coded interview data using a thematic analysis
approach. To validate the findings, we used multiple
data sources (interviews, documents, reports) and
shared preliminary findings with interviewees for
validation. Finally, we present the findings here
through a comprehensive report to illustrate key
points and provide a detailed discussion of how
challenges were overcome. This methodology aims to
provide a thorough understanding of the parameters
considered and challenges overcome during the
generative AI implementation journey in the selected
companies. It combines insights from interviews with
rigorous qualitative data analysis to enhance the
credibility and reliability of the research. A minimal
sketch of the coding step follows below.
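As an illustration of the coding step, the following minimal sketch (in Python) tallies thematically coded interview segments; the case labels and theme codes are hypothetical, not the study's actual codebook or data:

# A minimal sketch of tallying thematically coded interview segments.
# The case labels and theme codes are hypothetical illustrations,
# not the actual codebook or data of this study.
from collections import Counter

coded_segments = [
    ("case_1", "data_quality"),
    ("case_1", "stakeholder_communication"),
    ("case_2", "data_quality"),
    ("case_3", "ethics"),
    ("case_3", "data_quality"),
]

theme_counts = Counter(theme for _, theme in coded_segments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} coded segments")

Such tallies only support the qualitative reading; the thematic analysis itself remains interpretive.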
3.1 Organizational
The integration of gen AI into an environment, such
as a website, application, or messaging platform,
requires the evaluation of the realism and fidelity of
generative AI outputs. This includes metrics such as
human perception-based evaluations, cross-checked
against the output (Xu et al., 2018). An adversarial
evaluation is also required (Lucic et al., 2018), and
domain-specific evaluations are beneficial (Zhu et al.,
2017). This helps measure how realistic output is
compared to reference material considered true or
real data. The assessment of robustness in gen AI
models requires adversarial robustness testing
(Madry et al., 2017), out-of-distribution detection
(Hendrycks et al., 2018), and transfer learning
evaluation (Donahue et al., 2019). These can assess
how well the generative AI models perform under
different conditions and domain shifts.
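For reference, the adversarial robustness evaluation cited here (Madry et al., 2017) is commonly formulated as a min-max objective, where a model f_theta with loss L is assessed against the worst-case perturbation delta within a budget epsilon (a standard textbook rendering, not a formulation given by the interviewees):

\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_{\infty} \leq \epsilon} \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \Big]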
3.2 Technical
Generative AI can understand natural language,
interpret user intent, and generate appropriate
responses; to do so, it has technical requirements.
These are determined by the context of a specific
domain or industry, the availability and quality of
data, and the complexity of the decision-making
tasks. However, human involvement in
decision-making processes and contextual factors can
significantly influence the performance, reliability,
and usability of gen AI. Besides this, assessing the
diversity and novelty of generative AI outputs is
vital, including metrics such as diversity scores and
novelty scores (Li et al., 2021). These metrics allow
for a quantification of the extent to which the
generated outputs are relevant, diverse, and novel.
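One common diversity score is the distinct-n ratio: the share of unique n-grams among all n-grams in a set of outputs. A minimal sketch follows (in Python; the choice of this particular metric is our illustrative assumption, not a prescription from the cases):

# A minimal sketch of a distinct-n diversity score: the ratio of unique
# n-grams to total n-grams across a set of generated outputs.
def distinct_n(outputs: list[str], n: int = 2) -> float:
    ngrams = []
    for text in outputs:
        tokens = text.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

samples = ["the quick brown fox", "the quick red fox", "a slow green turtle"]
print(f"distinct-2: {distinct_n(samples):.2f}")  # prints 0.89 for this sample

Values near 1.0 indicate diverse outputs; values near 0.0 indicate heavy repetition.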
3.3 Environmental
Gen AI implementation metrics such as user
satisfaction, system reliability, and performance
create an impression of the environment (Davis,
1989); ease of use (Nielsen, 1993) and usefulness of
output (Venkatesh et al., 2003) are also vital.
Assessing these important considerations of
practicality and usability for gen AI in real-world
settings enables environmental understanding.
Ethical considerations must also be taken on board in
the assessment of gen AI, including fairness (Verma
et al., 2018), accountability (Doshi-Velez et al.,
2017), and interpretability (Ribeiro et al., 2016).
These important aspects to consider when evaluating
the impact and implications of gen AI have informed
the questions we used to understand implementation
in real-world applications.
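As an illustration of how the user-facing metrics above can be operationalized, a minimal sketch (in Python) of aggregating TAM-style survey items (cf. Davis, 1989) on a 1-7 Likert scale; the item names and responses are hypothetical:

# A minimal, hypothetical sketch of aggregating TAM-style survey items
# (perceived usefulness and perceived ease of use, cf. Davis, 1989);
# the responses below are invented for illustration.
from statistics import mean

responses = {
    "perceived_usefulness": [6, 7, 5, 6],     # one 1-7 rating per respondent
    "perceived_ease_of_use": [4, 5, 5, 6],
}

for construct, scores in responses.items():
    print(f"{construct}: mean {mean(scores):.2f} on a 1-7 scale (n={len(scores)})")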
4 OUTCOMES
This study provides a comprehensive understanding
of the applications of gen AI technologies in
corporate settings. We asked questions on the
implementation and its impact on business outcomes.
The results provide insights into the benefits and
challenges and inform the development of best
practices.
4.1 Organizational
The cases suggest that the successful adoption of
generative AI, particularly OpenAI, involves a
combination of technical strategies, stakeholder
communication, ethical considerations, diversity,
interpretability, and refinement based on contextual
factors and user feedback.
Diverse Training Data and Robustness: All three
interviews emphasize the importance of using diverse
training data to address robustness and generalization
issues. This includes exposure to a wide range of
examples and data distributions, fine-tuning on
domain-specific data, and incorporating external
inputs.
Interpretability Challenges: Achieving
interpretability in generative AI models remains a
challenge. While attention mechanisms, saliency
mapping, and post hoc analysis are mentioned, there
is a recognition that interpretability is an ongoing
challenge, and efforts are being made to improve it.
Stakeholder Communication: Clear
communication with stakeholders is crucial.
Techniques such as visualization, explanations
alongside outputs, and clear documentation are
mentioned across interviews to make the generated
outputs interpretable and understandable to
stakeholders.
Continuous Learning for Diversity: Ensuring
diversity and novelty in outputs requires continuous
learning. This involves not only using diverse training
data but also incorporating user feedback, subjective
evaluation, and constant updates to the model with
new content and inputs.
Contextual Factors Impact Implementation:
Contextual factors, such as domain-specific
considerations, data availability, complexity of
decision-making tasks, and human involvement, have
a significant impact on the implementation of
generative AI. This impact is seen in the need for
collaboration with domain experts, ethical and legal
considerations, and the iterative nature of the
implementation process.
Human Involvement & Ethical Consideration:
Human involvement is consistently highlighted as
crucial in the implementation process, not only for
providing domain expertise but also for ethical
verification. Ethical considerations, including privacy
and sensitivity, are integral to the development and
deployment of generative AI models.
Speed vs. Verification: The impact of contextual
factors, especially ethical and regulatory constraints,
can slow down the implementation process.
Verification steps, including ethical checks and
human interaction, are deemed necessary and
contribute to a more cautious and responsible
deployment of generative AI.
Iterative Model Refinement: The need for
continuous model refinement is evident, with
feedback loops from users, experts, and new data
being integral to addressing biases and errors and to
ensuring the latest input is represented in the outputs.
4.2 Technological
From technical perspective generative AI models are
a multifaceted challenge, requiring a combination of
qualitative and quantitative methods, a clear
understanding of task-specific metrics, and an
ongoing commitment to addressing subjectivity and
improving evaluation strategies. Stakeholder
engagement, transparency, and an iterative approach
to model refinement are critical aspects of successful
assessment and decision-making in adoption.
Diversity in Assessment Approaches: Interviewees
employ diverse approaches for assessing the realism
and fidelity of generative AI outputs. While one
interviewee did not provide an answer, others use a
mix of qualitative and quantitative methods,
including visual inspection, metrics like Inception
Score and FID, and a combination of both.
Benchmarking Challenges: The evaluation of
generative AI models faces challenges due to the lack
of definitive benchmarks, ground truth, and objective
standards for creativity and novelty. Benchmarking is
commonly done through standardized tests or
benchmarks, but interviewees acknowledge that
benchmark performance may not directly correlate
with real-world scenarios.
Evaluation Metrics: Qualitative evaluation, such as
visual inspection, is a common method used by
interviewees, often complemented by quantitative
metrics like Inception Score, FID, perplexity, and
diversity metrics. Task-specific metrics are
emphasized to ensure that the models perform well in
the intended context.
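For reference, two of the quantitative metrics named here have standard formulations (textbook renderings, not formulas supplied by the interviewees). FID compares the mean mu and covariance Sigma of feature embeddings of real (r) and generated (g) samples, and perplexity exponentiates the average negative log-likelihood over N tokens:

\mathrm{FID} = \|\mu_r - \mu_g\|_2^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\big), \qquad \mathrm{PPL} = \exp\Big(-\tfrac{1}{N}\sum_{i=1}^{N}\log p(w_i \mid w_{<i})\Big)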
Subjectivity & Lack of Ground Truth: Challenges
include the subjectivity in human judgments,
difficulty defining evaluation metrics without ground
truth, and the potential misalignment between
benchmark data and real-world scenarios.
Addressing Challenges: Approaches to addressing
challenges involve ensuring high data quality,
relevance of the evaluation data, and iterative
improvement.
Engagement with stakeholders, soliciting feedback,
and enhancing interpretability are essential.
Using Findings for Improvement: Findings are
used to identify strengths, weaknesses, and areas for
improvement. Informed decisions are made to refine
models, adjust architecture or training parameters,
and address limitations.
Engaging Stakeholders: Stakeholder engagement is
a recurring theme, emphasizing the importance of
considering user feedback, involving domain experts,
and aligning models with real-world needs.
Collaboration with stakeholders is essential for
refining models and making informed decisions.
Continuous Improvement and Documentation:
Iterative model refinement is a key strategy,
involving continuous monitoring, refinement, and
adaptation based on assessment results.
Documentation of insights and regular updates
contribute to a culture of continuous improvement.
Emphasis on Transparency and Diversity:
OpenAI, as mentioned in case 3, actively solicits
feedback and insights from diverse perspectives,
emphasizing transparency in evaluation practices.
This aligns with the broader industry trend toward
openness and inclusivity.
4.3 Environmental
Organizations are currently contending with the
ethical complexities presented by generative AI
models, demonstrating a collective dedication to
mitigating biases, fostering user acceptance, and
harmonizing models with organizational objectives.
The integration of generative AI necessitates the
careful navigation of ethical considerations, the
assurance of user acceptance and usability, alignment
with organizational goals, and the resolution of
distinct challenges in gauging effectiveness. A
prevalent and unifying element in the implementation
process is the ongoing pursuit of improvement,
propelled by continuous evaluation, user feedback,
and iterative refinement, which stands as a central
theme across diverse cases.
Ethical Considerations and Bias Mitigation: All
cases recognize the ethical challenges associated with
generative AI, particularly in training data and user
prompting. Preprocessing training data is a shared
concern, including efforts to reduce biases,
misinformation, and imbalances. Ongoing
monitoring, external audits, and user feedback play
crucial roles in ensuring adherence to ethical
guidelines and bias reduction.
User Acceptance and Usability: A feedback loop is
consistently emphasized across all cases, involving
explicit and implicit feedback, user testing, surveys,
and interviews. User-centric design principles guide
the evaluation of usability, with a focus on continuous
improvement based on user input.
Factors for Evaluating User Satisfaction and
Usefulness: Different use-cases for generative AI
models are acknowledged, including decision-
making aid and Retrieval Augmented Generation
(RAG). Factors such as effectiveness, efficiency, user
interface design, and relevance of outputs are
considered for evaluating user satisfaction and
usefulness.
Alignment with Organizational Goals: Strategies
for alignment vary, with prompt engineering,
collaboration, progress reviews, and stakeholder
engagement being key themes. Continuous
involvement of stakeholders, domain experts, and
users is highlighted to ensure that generative AI
models align with organizational goals.
Measurement of Effectiveness in Achieving
Outcomes: Various evaluation approaches are
discussed, including standardized tests/benchmarks,
task-specific metrics, and user satisfaction ratings.
Challenges in evaluating generative AI models are
acknowledged, requiring innovative approaches for
effectiveness measurement.
Examples of Assessment Impact: Two cases did not
provide specific examples due to confidentiality or
the absence of relevant instances; the third case
(OpenAI) highlights the broader impact on the field.
Assessment findings influence real-world
decision-making, leading to the identification of
improvement areas, the development of new
evaluation methodologies, and the formulation of
guidelines.
Common Theme: Continuous Improvement: Across
all aspects, a common theme is the emphasis on
continuous improvement. This includes refining
models based on user feedback, addressing biases,
and iterating assessment methodologies.
5 CONCLUSION & DISCUSSION
Theoretical development provided a systematic
approach to assessing gen AI implementation
observations, focused on matters such as user
satisfaction, system reliability, and performance. By
adopting this approach, we have effectively evaluated
the implementation of gen AI, creating insights that
aid future projects in making more informed
decisions. The interviews in these three different
cases have provided valuable insights into the
complex landscape of adopting gen AI. Successful
adoption often involves a nuanced combination of
technical strategies, stakeholder communication,
ethical considerations, and a persistent commitment
to diversity, interpretability, and refinement based on
contextual factors and user feedback.
We see that the availability of diversity in training
data is a recurring theme, emphasized because it
addresses robustness and generalization issues.
Exposure to a wide range of examples allows
companies to fine-tune on domain-specific data. The
incorporation of various inputs contributes to a
model's adaptability and helps achieve
interpretability, though this remains a common
challenge. Attention mechanisms, saliency mapping,
and post hoc analysis are being employed, which to
us highlights the ongoing developments and efforts in
the field of AI implementation.
On the organizational side, we see that effective
stakeholder communication is crucial. When
employing tactics such as visualizing progress and
creating explanations alongside outputs, this clear
documentation can serve as a championing artifact
within companies. The cases show the importance of
cultivating a culture of continuous learning to
improve diversity. Companies must involve not only
diverse training data but also user and employee
feedback, even if it is a subjective evaluation, as these
constant updates to models are invaluable.
We find that context is important to implementation,
as domain-specific considerations are essential to
suitable data availability. This is required to deal with
the complexity of decision-making tasks involving
human involvement. Human engagement is crucial
for providing domain-specific insight but also for
ethical verification. There is a trade-off between
speed and verification, which comes up in ethical and
regulatory constraints: a cautious and responsible
deployment of gen AI leads to quality outcomes but
hampers impact by being slower.
From a technical perspective, gen AI models present
many challenges. Addressing these requires a
combination of solutions: there is a need for a clear
understanding of task-specific metrics, and we find
that a commitment to addressing subjectivity and
improving evaluation strategies is important.
Furthermore, key stakeholder engagement drives
projects forward, and transparency and iterative
working help model refinement; this has emerged as
a critical aspect for successful adoption of AI.
We find that both qualitative and quantitative
methods are employed to evaluate the realism and
fidelity of generative AI outputs. Definitive
benchmarks and objective standards for creativity
and novelty are not available. Despite these
challenges, stakeholders actively engage in
continuous improvement, using findings to refine
models, adjust architecture or training parameters,
and address these limitations to the best of their
abilities.
Reviewing the ethical considerations, they appear to
be at the forefront of organizational priorities. There
is a shared commitment to addressing biases,
ensuring user acceptance, and aligning models with
organizational goals. However, bias mitigation
strategies linked with user feedback mechanisms and
iterative improvement are considered to have limited
impact on resolving these issues. The cases show a
commitment to transparency and diversity in ongoing
practices, aligning with broader industry trends
toward openness and inclusiveness.
In conclusion, the cases collectively paint a
comprehensive picture of the multifaceted challenges
and strategies involved in the adoption of generative
AI. Our findings underscore the need for holistic and
adaptive approaches, with a clear emphasis on
ongoing learning, stakeholder collaboration, and a
commitment to ethical and transparent practices.
However, the road to responsible deployment of gen
AI models is still fraught with ample challenges and
opportunities for improvement.
REFERENCES
Abbeel, P., & Zaremba, W. (2019). Generative AI: An AI
research laboratory consisting of the for-profit
corporation Generative AI LP and its parent company,
the non-profit Generative AI Inc. arXiv preprint
arXiv:1906.11691.
Agrawal, K. (2023). Towards Adoption of Generative AI in
Organizational Settings. Journal of Computer
Information Systems.
Alsheibani, S., Cheung, Y., & Messom, C. (2018).
Artificial Intelligence Adoption: AI-readiness at
Firm-Level. PACIS, 4, 231-245.
Alvim, A., & Grushin, B. (2019). Generative AI and the
future of artificial intelligence. Journal of the
Association for Computing Machinery, 66(7), 1-19.
Baier, L., Jöhren, F., & Seebacher, S. (2019). Challenges
in the deployment and operation of machine learning in
practice. In Proceedings of the 27th European
Conference on Information Systems (ECIS),
Stockholm, Sweden.
Bandi, A., Adapa, P. V. S. R., & Kuchi, Y. E. V. P. K.
(2023). The Power of Generative AI: A Review of
Requirements, Models, Input–Output Formats,
Evaluation Metrics, and Challenges. Future Internet,
15(8), 260. https://doi.org/10.3390/fi15080260
Borges, A. F., Laurindo, F. J., Spínola, M. M., Gonçalves,
R. F., & Mattos, C. A. (2020). The strategic use of
artificial intelligence in the digital era: Systematic
literature review. International Journal of Information
Management, 102225.
Bostrom, N., & Yudkowsky, E. (2014). The ethics of
artificial intelligence. Cambridge University Press.
Boyd, R., & Holton, R. (2019). Technology, innovation,
employment and power: Does robotics and artificial
intelligence really mean social transformation?
Journal of Sociology, 54(3), 331-345.
Coombs, C., Hislop, D., Taneva, S. K., & Barnard, S.
(2020). The strategic impacts of Intelligent Automation
for knowledge and service work. The Journal of
Strategic Information Systems, 29(4), 101600.
Davis, F. D. (1989). Perceived usefulness, perceived ease
of use, and user acceptance of information technology.
MIS Quarterly, 13(3), 319-340.
Dedrick, J., Kraemer, K. L., & Shih, E. (2013).
Information technology and productivity in developed
and developing countries. Journal of Management
Information Systems, 30(1), 97-122.
Demlehner, Q., & Laumer, S. (2020). Shall we use it or not?
Explaining the adoption of artificial intelligence for car
manufacturing purposes. In Proceedings of the 28th
European Conference on Information Systems (ECIS).
Doshi-Velez, F., Kim, B., & Wood, A. (2017). Towards a
rigorous science of interpretable machine learning.
arXiv preprint arXiv:1711.01547.
Donahue, J., Girshick, R., & Darrell, T. (2019). Adversarial
feature learning. In Advances in Neural Information
Processing Systems (pp. 2394-2404).
Dwivedi, Y. K., et al. (2019). Re-examining the unified
theory of acceptance and use of technology (UTAUT):
Towards a revised theoretical model. Information
Systems Frontiers, 21, 719-734.
European Commission. (2019a). Ethics guidelines for
trustworthy AI.
European Commission. (2019b). Proposal for a Regulation
laying down harmonised rules on artificial intelligence.
Finch, G., Goehring, B., & Marshall, A. (2017). The
enticing promise of cognitive computing: high-value
functional efficiencies and innovative enterprise
capabilities. Strategy & Leadership, 45(6), 26–33.
Ghimire, P., Kim, K., & Acharya, M. (2023). Generative AI
in the Construction Industry. arXiv preprint.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014).
Generative adversarial nets. In Advances in Neural
Information Processing Systems (pp. 2672-2680).
Hendrycks, D., Mazeika, M., & Dietterich, T. (2018). Deep
anomaly detection with outlier exposure. arXiv preprint
arXiv:1812.04606.
Jaiswal, A., & Mahalle, P. N. (2021). Generative AI
Chatbot Framework and Metrics for Implementation.
International Journal of Advanced Science and
Technology, 30(1), 1281-1290.
Keding, C. (2020). Understanding the interplay of artificial
intelligence and strategic management. Management
Review Quarterly, 71(1), 91–134.
Kelleher, J. D., Mac Namee, B., & D'Arcy, A. (2015). Data-
driven modeling & scientific computation: Methods for
complex systems & big data. CRC Press.
Kingma, D. P., & Welling, M. (2013). Auto-encoding
variational Bayes. arXiv preprint arXiv:1312.6114.
Lee, J., Suh, T., Roy, D., & Baucus, M. (2019). Emerging
technology and business model innovation: the case of
artificial intelligence. Journal of Open Innovation:
Technology, Market, and Complexity, 5(3), 44.
Li, J., Huang, Y., Zhang, S., & Zhu, T. (2021). A
Comprehensive Review of Generative AI Evaluation
Metrics. arXiv preprint arXiv:2104.07660.
Makarius, E. E., Mukherjee, D., Fox, J. D., & Fox, A. K.
(2020). Rising with the machines: A sociotechnical
framework for bringing artificial intelligence into the
organization. Journal of Business Research, 120,
262–273.
Mikalef, P., & Gupta, M. (2021). Artificial Intelligence
Capability: Conceptualization, measurement
calibration, and empirical study on its impact on
organizational creativity and firm performance.
Information & Management, 58(3), 103434.
Mishra, A. N., & Pani, A. K. (2020). Business value
appropriation roadmap for artificial intelligence. VINE
Journal of Information and Knowledge Management
Systems, 51(3), 353–368
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl,
W., Vidal, M. E., & Krasanakis, E. (2020). Bias in data-
driven artificial intelligence systems—An introductory
survey. Wiley Interdisciplinary Reviews: Data Mining
and Knowledge Discovery, 10(3), e1356
Pumplun, L., Tauchert, C., & Heidt, M. (2019). A new
organizational chassis for artificial intelligence:
exploring organizational readiness factors. In
Proceedings of the 27th European Conference on
Information Systems (ECIS).
Schmidt, R., Zimmermann, A., Moehring, M., & Keller, B.
(2020). Value creation in connectionist artificial
intelligence – A research agenda.
Wamba-Taguimdje, S. L., Wamba, S. F., Kamdjoug, J. R.
K., & Wanko, C. E. T. (2020). Influence of artificial
intelligence (AI) on firm performance: the business
value of AI-based transformation projects. Business
Process Management Journal, 26(7), 1893–1924.
Wang, H., Huang, J., & Zhang, Z. (2019). The impact of
deep learning on organizational agility. In Proceedings
of the 40th International Conference on Information
Systems (ICIS), Munich, Germany.
Wu, Y., Zhang, Y., Zhu, Q., & Zhou, A. (2020). Towards
an Evaluation Framework for Generative AI Systems.
International Conference on Artificial Intelligence in
Information and Communication (pp. 180-187).