Authors:
Johannes Schneider (1); Leona Kruse (2) and Isabella Seeber (3)
Affiliations:
(1) Department of Computer Science and Information Systems, University of Liechtenstein, Vaduz, Liechtenstein
(2) Department of Information Systems, University of Agder, Norway
(3) Department of Management, Technology and Strategy, Grenoble Ecole de Management, France
Keyword(s):
Large Language Models, Education, Children, Adolescents.
Abstract:
Large language models like ChatGPT are increasingly used by people of all age groups. They have already begun to transform education and research. However, these models are also known to have a number of shortcomings, e.g., they can hallucinate or provide biased responses. While adults might be able to recognize such shortcomings, children, the most vulnerable group in our society, might not. Thus, in this paper, we analyze responses by OpenAI's ChatGPT to commonly asked questions tailored to different age groups. Our assessment uses Habermas' validity claims, which we operationalize through computational measures such as established reading scores and through interpretative analysis. Our results indicate that responses were mostly, but not always, truthful, legitimate, and comprehensible, and that they aligned with the respective developmental phases, with one important exception: responses for two-year-olds.