Jarrahi, M. H., Askay, D., Eshraghi, A., & Smith, P. (2022).
Artificial intelligence and knowledge management: A
partnership between human and AI. Business Horizons.
https://doi.org/10.1016/j.bushor.2022.03.002
Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial intelligence: The global landscape of ethics guidelines. arXiv. https://doi.org/10.48550/arXiv.1906.11668
Lehoux, P., Rivard, L., de Oliveira, R. R., Mörch, C. M., & Alami, H. (2023). Tools to foster responsibility in digital solutions that operate with or without artificial intelligence: A scoping review for health and innovation policymakers. International Journal of Medical Informatics, 170, 104933. https://doi.org/10.1016/j.ijmedinf.2022.104933
Leijnen, S., Aldewereld, H., van Belkom, R., Bijvank, R., & Ossewaarde, R. (2020). An agile framework for trustworthy AI. NeHuAI@ECAI, 75–78.
Lukkien, D. R. M., Nap, H. H., Buimer, H. P., Peine, A., Boon, W. P. C., Ket, J. C. F., Minkman, M. M. N., & Moors, E. H. M. (2023). Toward responsible artificial intelligence in long-term care: A scoping review on practical approaches. The Gerontologist, 63(1), 155–168. https://doi.org/10.1093/geront/gnab180
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Microsoft Corporation. (2022). Microsoft Responsible AI Standard. https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf
Minkkinen, M., Zimmer, M. P., & Mäntymäki, M. (2023). Co-shaping an ecosystem for responsible AI: Five types of expectation work in response to a technological frame. Information Systems Frontiers, 25(1), 103–121. https://doi.org/10.1007/s10796-022-10269-2
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5
Murray, D. (1997). Ethics in Organizations. Kogan Page
Publishers.
Nagbøl, P. R., Müller, O., & Krancher, O. (2021). Designing a risk assessment tool for artificial intelligence systems: Vol. 12807 LNCS (p. 339). https://doi.org/10.1007/978-3-030-82405-1_32
Narayanan, M., & Schoeberl, C. (2023). A Matrix for Selecting Responsible AI Frameworks. Center for Security and Emerging Technology. https://doi.org/10.51593/20220029
Noiseau, P. (2023). Ethics of care and artificial intelligence: The need to integrate a feminist normative approach. In B. Prud’homme, C. Régis, G. Farnadi, V. Dreier, S. Rubel, & C. d’Oultremont (Eds.), Missing links in AI governance (pp. 344–358). Paris: UNESCO; Montréal: Mila – Québec Institute of Artificial Intelligence.
Obermeyer, Z., Nissan, R., Stern, M., Eaneff, S., Bembeneck, E. J., & Mullainathan, S. (2021). Algorithmic Bias Playbook. Chicago Booth. https://www.chicagobooth.edu/research/center-for-applied-artificial-intelligence/research/algorithmic-bias/playbook
Oliveira, D., & Dalkir, K. (2022). Knowledge capture for the design of a technology assessment tool. 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, 2, 185–192. https://doi.org/10.5220/0011551400003335
Perry, N., Srivastava, M., Kumar, D., & Boneh, D. (2022). Do users write more insecure code with AI assistants? (arXiv:2211.03622). arXiv. https://doi.org/10.48550/arXiv.2211.03622
Probasco, E. (2022). A Common Language for Responsible
AI. Center for Security and Emerging Technology.
Qiang, V., Rhim, J., & Moon, A. (2023). No such thing as one-size-fits-all in AI ethics frameworks: A comparative case study. AI & Society. https://doi.org/10.1007/s00146-023-01653-w
Reich, B. H., & Benbasat, I. (2000). Factors that influence the social dimension of alignment between business and information technology objectives. MIS Quarterly, 24(1), 81–113.
Richards, J., Piorkowski, D., Hind, M., Houde, S., & Mojsilović, A. (2020). A methodology for creating AI FactSheets. arXiv. http://arxiv.org/abs/2006.13796
Rolls-Royce. (2021). The Aletheia Framework 2.0. https://www.rolls-royce.com/~/media/Files/R/RollsRoyce/documents/stand-alone-pages/aletheia-framework-booklet-2021.pdf
Siala, H., & Wang, Y. (2022). SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Social Science & Medicine, 296, 114782. https://doi.org/10.1016/j.socscimed.2022.114782
Skitka, L. J., Mosier, K. L., Burdick, M., & Rosenblatt, B. (2000). Automation bias and errors: Are crews better than individuals? International Journal of Aviation Psychology, 10(1), 85–97. https://doi.org/10.1207/S15327108IJAP1001_5
Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. ACM International Conference Proceeding Series. https://doi.org/10.1145/3465416.3483305
Tursunbayeva, A., & Renkema, M. (2022). Artificial intelligence in health-care: Implications for the job design of healthcare professionals. Asia Pacific Journal of Human Resources. https://doi.org/10.1111/1744-7941.12325