Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D.,
Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., & others. (2020). Language models are few-
shot learners. Advances in Neural Information
Processing Systems, 33, 1877–1901.
Buchanan, L. (2022). Did a Fourth Grader Write This? Or the New Chatbot? The New York Times. https://www.nytimes.com/interactive/2022/12/26/upshot/chatgpt-child-essays.html?searchResultPosition=2
Chall, J. S., & Dale, E. (1995). Readability revisited: The new
Dale-Chall readability formula. Brookline Books.
Choi, J. H., Hickman, K. E., Monahan, A., & Schwarcz, D. (2023). ChatGPT goes to law school. Available at SSRN.
Cukier, W., Ngwenyama, O., Bauer, R., & Middleton, C.
(2009). A critical analysis of media discourse on
information technology: Preliminary results of a
proposed method for critical discourse analysis.
Information Systems Journal, 19(2).
Flesch, R. (1948). A new readability yardstick. Journal of
Applied Psychology, 32(3), 221.
Gilbert, E., Bergstrom, T., & Karahalios, K. (2009). Blogs are echo chambers: Blogs are echo chambers. Hawaii International Conference on System Sciences, 1–10.
Habermas, J. (1985). The theory of communicative action: Volume 1: Reason and the rationalization of society. Beacon Press.
Habermas, J., & McCarthy, T. (1987). Lifeworld and system: A critique of functionalist reason. Beacon Press.
Heng, M. S. H., & De Moor, A. (2003). From Habermas’s
communicative theory to practice on the internet.
Information Systems Journal, 13(4), 331–352.
HF Canonical Model Maintainers. (2022). Distilbert-base-
uncased-finetuned-sst-2-english. Hugging Face.
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E.,
Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of
hallucination in natural language generation. ACM
Computing Surveys, 55(12), 1–38.
Jigsaw. (2017). Perspective API. https://www.perspectiveapi.com/
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M.,
Dementieva, D., Fischer, F., Gasser, U., Groh, G., &
others. (2023). ChatGPT for good? On opportunities and
challenges of large language models for education.
Learning and Individual Differences, 103, 102274.
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y.
(2022). Large Language Models are Zero-Shot
Reasoners. arXiv Preprint arXiv:2205.11916.
Kortemeyer, G. (2023). Could an Artificial-Intelligence
agent pass an introductory physics course? arXiv Preprint
arXiv:2301.12127.
Leidner, D., & Tona, O. (2021). The CARE Theory of
Dignity Amid Personal Data Digitalization. Management
Information Systems Quarterly, 45(1).
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig,
G. (2021). Pre-train, prompt, and predict: A systematic
survey of prompting methods in natural language
processing. arXiv Preprint arXiv:2107.13586.
Longo, L., Brcic, M., Cabitza, F., Choi, J., Confalonieri, R.,
Del Ser, J., Guidotti, R., Hayashi, Y., Herrera, F.,
Holzinger, A., & others. (2023). Explainable artificial
intelligence (XAI) 2.0: A manifesto of open challenges
and interdisciplinary research directions. arXiv Preprint
arXiv:2310.19775.
Maree, J. G. (2021). The psychosocial development theory of
Erik Erikson: Critical overview. Early Child
Development and Care, 191(7–8), 1107–1121.
Mikalef, P., Conboy, K., Lundström, J. E., & Popovič, A.
(2022). Thinking responsibly about responsible AI and
‘the dark side’ of AI. European Journal of Information
Systems, 31(3), 257–268.
Mökander, J., Schuett, J., Kirk, H. R., & Floridi, L. (2023).
Auditing large language models: A three-layered
approach. AI and Ethics, 1–31.
OpenAI. (2023). GPT-4 Technical Report. arXiv Preprint
arXiv:2303.08774.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C.,
Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A.,
& others. (2022). Training language models to follow
instructions with human feedback. Advances in Neural
Information Processing Systems.
Papalia, D., & Martorell, G. (2023). Experience Human
Development (15th ed.). McGraw Hill.
Papalia, D., Olds, S., & Feldman, R. (2008). Human
Development. McGraw-Hill Education.
Pérez, J. M., Giudici, J. C., & Luque, F. (2021).
pysentimiento: A Python Toolkit for Sentiment Analysis
and SocialNLP tasks.
Porra, J., Lacity, M., & Parks, M. S. (2020). “Can Computer Based Human-Likeness Endanger Humanness?” – A Philosophical and Ethical Perspective on Digital Assistants Expressing Feelings They Can’t Have. Information Systems Frontiers, 22(3), 533–547.
Schlagwein, D., Cecez-Kecmanovic, D., & Hanckel, B.
(2019). Ethical norms and issues in crowdsourcing
practices: A Habermasian analysis. Information Systems
Journal, 29(4), 811–837.
Schneider, J., Abraham, R., Meske, C., & Brocke, J. V.
(2022). Artificial Intelligence Governance For
Businesses. Information Systems Management.
Schneider, J., Meske, C., & Kuss, P. (2024). Foundation Models. Business & Information Systems Engineering.
Schneider, J., Richner, R., & Riser, M. (2023). Towards
trustworthy autograding of short, multi-lingual, multi-
type answers. International Journal of Artificial
Intelligence in Education, 33(1), 88–118.
Schneider, J., Schenk, B., Niklaus, C., & Vlachos, M. (2023).
Towards LLM-based Autograding for Short Textual
Answers. arXiv Preprint arXiv:2309.11508.
Schöbel, S., Schmitt, A., Benner, D., Saqr, M., Janson, A., &
Leimeister, J. M. (2023). Charting the Evolution and
Future of Conversational Agents: A Research Agenda
Along Five Waves and New Frontiers. Information
Systems Frontiers.
Stahl, B. C., Doherty, N. F., & Shaw, M. (2012). Information
security policies in the UK healthcare sector: A critical
evaluation. Information Systems Journal, 22(1), 77–94.
https://doi.org/10.1111/j.1365-2575.2011.00378.x
Young, A. G. (2018). Using ICT for social good: Cultural
identity restoration through emancipatory pedagogy.
Information Systems Journal, 28(2), 340–358.
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y.,
Min, Y., Zhang, B., Zhang, J., Dong, Z., & others. (2023).
A survey of large language models. arXiv Preprint
arXiv:2303.18223.