and verification, which arise from ethical and regulatory constraints. A cautious and responsible deployment of generative AI yields quality outcomes but limits impact because it is slower.
From a technical perspective, generative AI models present many challenges. Addressing them requires a combination of solutions: a clear understanding of task-specific metrics, a commitment to addressing subjectivity, and improved evaluation strategies. Furthermore, engagement of key stakeholders drives projects forward, while transparency and iterative working support model refinement; these have emerged as critical aspects for the successful adoption of AI.
We find that both qualitative and quantitative
methods are employed to evaluate the realism and
fidelity of generative AI outputs. However, benchmarks that provide objective standards for creativity and novelty are not available. Despite these challenges, stakeholders actively engage in continuous improvement, using findings to refine models, adjust architectures or training parameters, and address these limitations to the best of their abilities.
Ethical considerations appear to be at the forefront of organizational priorities. There is a shared commitment to addressing biases, ensuring user acceptance, and aligning models with organizational goals. However, bias-mitigation strategies coupled with user feedback mechanisms and iterative improvement are considered to have only a limited impact on resolving these issues. The cases show a commitment to transparency and diversity in ongoing practices, in line with broader industry trends toward openness and inclusiveness.
In conclusion, the cases collectively paint a comprehensive picture of the multifaceted challenges and strategies involved in the adoption of generative AI. Our findings underscore the need for holistic and adaptive approaches, with a clear emphasis on ongoing learning, stakeholder collaboration, and a commitment to ethical and transparent practices. However, the road to responsible deployment of generative AI models remains fraught with challenges, as well as opportunities for improvement.