GenAI as a Learning Assistant, an Empirical Study in
Higher Education
Lukas Spirgi (https://orcid.org/0000-0002-3807-6460) and Sabine Seufert (https://orcid.org/0009-0003-7182-949X)
Institute for Educational Management and Technologies, University of St. Gallen, Switzerland
Keywords: Artificial Intelligence (AI), Learning, AI Literacy, Higher Education.
Abstract: This empirical study investigates the use of Generative Artificial Intelligence (GenAI) as a learning assistant
in higher education and explores the impact of AI literacy on its frequency of use. Based on theoretical models
of cognitive and metacognitive learning strategies, the study analyzes how students use GenAI to support
these learning processes. Several usage scenarios were developed to explore how GenAI can be used as a
learning assistant for cognitive and metacognitive learning strategies. The results show that GenAI is
predominantly used to support cognitive learning strategies, such as explaining complex concepts and
summarizing texts, while its use to support metacognitive learning strategies, such as self-regulation and
learning planning, is less frequent. The study is based on an online survey of 266 students from the University
of St. Gallen. Using the AI literacy model of Ng et al., four dimensions of AI competence (affective, behavioral,
cognitive, and ethical) are measured, with the behavioral dimension identified as a significant predictor of
GenAI usage for learning activities. Developing targeted programs to promote practical AI literacy is
considered necessary to facilitate the integration of GenAI into learning processes and realize its potential
more fully.
1 INTRODUCTION
There are many different ways in which AI can be
used in education. Several studies are exploring the
opportunities and risks of new AI technologies for
educational organizations (Adiguzel et al., 2023;
Michel-Villarreal et al., 2023; Zhang & Aslan, 2021).
The use of AI tools in universities is currently the
subject of heated debate. Some view this type of
technological support as a form of cheating, particularly
when students use AI to assist with tasks like writing
papers or generating content. Others, however,
believe that collaborating with AI tools represents the
future of contemporary education, where such
technologies can enhance learning outcomes and
foster new ways of acquiring and applying knowledge
(Schneider, 2023).
A previous study by Spirgi et al. (2024) shows that
students use generative AI (GenAI) for writing
academic texts, but its potential for supporting other
learning processes remains partly underexplored. AI
technologies have had, and will continue to have, a significant impact
on the way we learn (Chen et al., 2020). These
changes are particularly important for students, who need to build up a substantial body of knowledge in order to complete their studies successfully.
This study focuses on how university students use GenAI for learning. In its role as a learning assistant (LA), GenAI becomes an integral part of students' learning strategies.
Research in this area is essential because GenAI
offers students many opportunities for individualized
learning and can, therefore, increase their learning
success (Alshami et al., 2023; Michel-Villarreal et al.,
2023). Early studies have already shown that students
use AI to learn (von Garrel et al., 2023).
2 AI IN ACADEMIC LEARNING
2.1 AI as a Learning Assistant
As highlighted in the introduction, GenAI tools can
be utilized to support individualized learning for
students. A GenAI tool becomes a learning assistant when it forms an integral part of a student's learning strategy. There are two types of learning strategies for which AI can be used.
Learning strategies can be defined as internal programs for controlling learning processes; they guide intentional learning (Schnotz, 2011).
A distinction can be made between cognitive and
metacognitive learning strategies (Wild & Schiefele,
1994).
Cognitive learning strategies refer to the direct
processes used to acquire, process, and understand
knowledge (Nückles, 2021). These strategies enable
learners to take in, organize, and remember
information actively. They focus on the "what" of
learning - the specific content and how to process it.
Metacognitive learning strategies are strategies
that relate to "thinking about thinking." They involve
planning, monitoring, reflecting, and adapting one's
learning process (Hasselhorn & Andju, 2021). These
strategies help learners to manage and optimize their
cognitive processes. It is about knowing how to learn
and being able to self-regulate learning.
GenAI can function as a learning assistant for both
cognitive and metacognitive strategies. For instance,
it can provide explanations for complex concepts that
were not sufficiently covered in class, helping
students actively acquire and understand knowledge
(cognitive). Additionally, GenAI can assist in
creating a structured learning plan, enabling students
to plan, monitor, and adapt their learning process
effectively (metacognitive). For more details, see Table 3.
2.2 AI Literacy
With the advent of large language models like
ChatGPT, AI became accessible to everyone (Seufert
& Spirgi, 2024). Technological advancements have
led to increased and more complex requirements for
the digital competencies of both students and
employees (Seufert & Guggemos, 2021). To navigate
the opportunities and challenges presented by AI,
individuals must acquire a foundational knowledge of
AI and the skills to utilize and assess AI systems
effectively (Hornberger et al., 2023). These abilities
are commonly known as AI literacy (Long &
Magerko, 2020). AI literacy has been recognized as
an essential digital literacy across various disciplines
and aspects of everyday life (Kandlhofer et al., 2016;
Long & Magerko, 2020; Ng et al., 2021a; Ng et al.,
2021b). The concept of "AI literacy" is understood
and defined in multiple ways. In their study, Ng et al.
(2024) define AI literacy as a comprehensive concept
based on four dimensions: affective dimension,
behavioral dimension, cognitive dimension, and
ethical dimension.
Affective dimension: This dimension refers to
learners' emotional responses and attitudes towards
AI. It includes intrinsic motivation, self-efficacy,
career interest, and confidence when interacting with
AI. The goal is to foster positive emotions and
attitudes to support learning and engagement with AI.
Behavioral dimension: This dimension focuses on
learners' actual behaviors when interacting with AI,
including behavioral intention, engagement, and
collaboration. The aim is to measure behaviors that
indicate active participation and the application of AI
skills in learning contexts.
Cognitive dimension: This captures learners'
knowledge and higher-order thinking skills, ranging
from understanding AI principles to applying and
evaluating AI solutions.
Ethical dimension: This dimension addresses the
moral understanding and responsible use of AI
technologies. It focuses on ethical issues such as
privacy, social responsibility, transparency, and
digital security. The goal is to ensure students use AI
responsibly and reflectively in all stages of their
learning and development.
3 THE PRESENT STUDY
This study aims to assess the prevalence of using
GenAI tools as learning assistants. Additionally, this
study aims to examine how AI literacy influences the
usage of these tools in educational settings. To
explore the role of GenAI in learning strategies, we
pose three research questions:
1. How often do students use GenAI as a
learning assistant for cognitive and
metacognitive learning strategies?
2. Are there any gender differences in using AI
as a learning assistant?
3. How does AI literacy affect the frequency of
using GenAI as a learning assistant?
The AI literacy concept of Ng et al. (2024) serves as the foundation for addressing research question 3 (see Section 2.2). By exploring the role of GenAI in
educational processes, this study deepens our
understanding of how AI can support learning. It
advances the field by offering a comprehensive
perspective on the integration of AI into the learning
process, with a particular focus on both cognitive and
metacognitive learning strategies.
4 METHODS
4.1 Online Survey and Sample
A digital survey was selected for the study to
thoroughly investigate students' experiences with AI.
The survey was conducted online using the
"Qualtrics" platform between September and October
2024. All the questions were single-choice. A total of
266 students from the local university completed the
survey. The average age of the respondents was 19.7 years (SD = 1.1). The students taking part in the survey were studying either economics or law. Table 1
shows the composition of the sample.
Table 1: Sample.
Characteristic Absolute Percentage
Female Students 107 40.2 %
Male Students 157 59.0 %
Diverse Students 2 0.8 %
First Semester Students 253 95.1 %
Bachelor Students 8 3.0 %
Master Students 5 1.9 %
4.2 Development of Instrument
The questionnaire consisted of two sections. The first
section assessed the frequency of using GenAI as a
learning assistant. The authors of this article
developed four items covering scenarios in which
GenAI is used to support cognitive learning
strategies. The areas addressed include acquiring
knowledge, summarizing content, applying
knowledge, and explicitly explaining concepts.
Additionally, four items were developed to address
scenarios where GenAI supports metacognitive
learning strategies. These included planning the
learning process, self-monitoring, self-reflection, and
adapting learning strategies. The fully elaborated
items are presented in Table 3. Participants were
asked to indicate the frequency of their usage on a 5-
point Likert scale. The scale included the following
response options: "Never" = no use, "Rarely" = once
per semester, "Sometimes" = once per month,
"Frequently" = multiple times per month, and
"Usually" = once or more per week.
In the second part of the survey, the validated instrument of Ng et al. (2024) was used to assess participants'
AI literacy. This instrument measured four
dimensions of AI literacy: the affective dimension,
the behavioral dimension, the cognitive dimension,
and the ethical dimension. Participants were
presented with 5 to 6 items for each of these
dimensions. Respondents were asked to indicate the
extent to which they agreed with each statement using
a 5-point Likert scale with the following response
options: "strongly disagree,” "disagree,” "neutral,”
"agree," and "strongly agree." An example item was:
"To what extent do you agree with the following
statement: Artificial intelligence is relevant to my
everyday life (e.g., personal, work)."
4.3 Statistical Testing
Statistical testing was first performed using multiple regression analysis, which evaluates whether a dependent variable can be predicted from a set of independent variables (Seber & Lee, 2012). The significance level was set at α = 0.05. Assumptions of the regression, including
linearity, normality of residuals, homoscedasticity,
multicollinearity, and independence of errors, were
tested to ensure the validity of the model. Regression
analysis was performed using R.
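As an illustration of this procedure, the following minimal sketch reproduces the analysis in Python rather than in R (statsmodels and scipy stand in for the R routines); the file name and the column names for the scale scores are hypothetical placeholders, not the variable names used in the study.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy import stats

# Hypothetical data: one row per respondent, one column per scale score.
df = pd.read_csv("survey_scales.csv")

X = sm.add_constant(df[["affective", "behavioral", "cognitive", "ethical"]])
y = df["learning_assistant"]

# Multiple regression: predict Learning Assistant use from the four AI literacy dimensions.
model = sm.OLS(y, X).fit()
print(model.summary())  # coefficients, p-values, R², adjusted R², F statistic

# Assumption checks: normality of residuals, homoscedasticity, multicollinearity.
print("Shapiro-Wilk:", stats.shapiro(model.resid))
print("Breusch-Pagan:", het_breuschpagan(model.resid, X))
for i, name in enumerate(X.columns[1:], start=1):
    print("VIF", name, variance_inflation_factor(X.values, i))
```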
The scales were created by calculating the mean of their corresponding items. A total of five scales were created for this paper (see Table 2). The Learning Assistant scale represents the use of AI as a learning assistant; each of the other scales represents one dimension of AI literacy.
Differences between two means were tested using an independent two-sample t-test.
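A corresponding sketch of the scale construction and the gender comparison, again with hypothetical column names for the raw Likert items:

```python
import pandas as pd
from scipy import stats

# Hypothetical raw data: Likert responses coded 1 ("Never") to 5 ("Usually").
df = pd.read_csv("survey_items.csv")

# Scale score = mean of the corresponding items, e.g. the four cognitive-use items.
cognitive_items = ["use_explain", "use_summarize", "use_apply", "use_simplify"]
df["cognitive_use"] = df[cognitive_items].mean(axis=1)

# Independent two-sample t-test: do women and men differ in mean frequency of use?
women = df.loc[df["gender"] == "female", "cognitive_use"]
men = df.loc[df["gender"] == "male", "cognitive_use"]
t_stat, p_value = stats.ttest_ind(women, men)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```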
5 RESULTS
5.1 Internal Consistency
The internal consistency of the scales was evaluated
using Cronbach's alpha, a measure that indicates how
closely related the items within an index are.
Table 2: Scales and internal consistency.

Scale Items Cronbach's Alpha
Learning Assistant 12 α = 0.87
Affective dimension 6 α = 0.80
Behavioral dimension 5 α = 0.73
Cognitive dimension 5 α = 0.77
Ethical dimension 6 α = 0.74
All indices showed values above 0.7, reflecting an acceptable to good level of internal consistency (Cronbach, 1951).
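For completeness, the coefficient can also be computed directly from the item responses; a minimal sketch, assuming a DataFrame that holds one column per item of a single scale:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Example (hypothetical item columns of the behavioral dimension):
# print(cronbach_alpha(df[["beh_1", "beh_2", "beh_3", "beh_4", "beh_5"]]))
```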
Table 3: AI as a Personal Learning Assistant: "I use GenAI…" (M, SD, and response distribution per item).

Cognitive:
- to get explanations for complex concepts that were not sufficiently covered in class: M = 2.9 (SD = 1.4); Never 22 %, Rarely 17 %, Sometimes 24 %, Frequently 20 %, Usually 18 %
- to get concise summaries of longer texts or articles: M = 2.9 (SD = 1.3); Never 21 %, Rarely 17 %, Sometimes 25 %, Frequently 24 %, Usually 14 %
- to obtain examples of how to apply theoretical knowledge in practical situations: M = 2.2 (SD = 1.2); Never 39 %, Rarely 24 %, Sometimes 19 %, Frequently 12 %, Usually 6 %
- to get simplified explanations of difficult terms or theories: M = 3.0 (SD = 1.4); Never 22 %, Rarely 14 %, Sometimes 23 %, Frequently 21 %, Usually 20 %

Metacognitive:
- to create a structured learning plan: M = 1.5 (SD = 0.9); Never 71 %, Rarely 17 %, Sometimes 6 %, Frequently 5 %, Usually 1 %
- to check my understanding of the learning material by testing myself or completing exercises: M = 1.7 (SD = 1.1); Never 65 %, Rarely 14 %, Sometimes 11 %, Frequently 5 %, Usually 5 %
- to receive feedback on my learning progress and to identify areas for improvement: M = 1.5 (SD = 1.0); Never 72 %, Rarely 14 %, Sometimes 7 %, Frequently 4 %, Usually 3 %
- to find alternative learning methods when I have difficulty understanding the material: M = 1.8 (SD = 1.1); Never 60 %, Rarely 15 %, Sometimes 14 %, Frequently 7 %, Usually 3 %
5.2 Current Frequency of Using GenAI
as a Learning Assistant
Table 3 presents data on the usage frequency of GenAI
as a personal learning assistant across two primary
categories: cognitive and metacognitive learning
strategies. In the cognitive domain, the average
frequency of AI use is higher compared to
metacognitive strategies. The difference is statistically
significant. Overall, GenAI is used infrequently for
learning, as indicated by the generally low mean scores
across both cognitive and metacognitive tasks.
The standard deviations are similar across items. The response distributions are shown in the table; the percentages indicate the relative frequency of each answer choice.
The item with the highest average (mean) value is
"to get simplified explanations of difficult terms or
theories", with a mean of 3.0 (SD = 1.4). This
indicates that GenAI supports this cognitive task the
most frequently among the participants in the study.
The item with the lowest average (mean) value is
"to create a structured learning plan" in the
metacognitive category, with a mean of 1.5 (SD =
0.9). This suggests that using GenAI for this specific
metacognitive task is the least frequent.
5.3 Gender Differences
The following figure shows the gender differences in
frequency of use. For the sake of simplicity, the
gender category "Diverse" was not included in the
diagram.
Figure 1: Gender differences in mean frequency of use (women: 2.84 cognitive, 1.57 metacognitive; men: 2.73 cognitive, 1.67 metacognitive).
For both cognitive and metacognitive learning
strategies using GenAI, the differences in mean
scores between women and men were not statistically
significant. This indicates that there is insufficient
statistical evidence to support the existence of
systematic differences between genders on these
scales.
5.4 Frequency of Use and AI Literacy
The regression analysis shows that the model
provides a somewhat limited but sufficient
explanation of the variance in the dependent variable,
Learning Assistant. The overall model is statistically
significant.
Among the predictors, the behavioral dimension emerged as a significant positive predictor of GenAI usage as a learning assistant, suggesting that individuals with higher scores on this dimension are more likely to use GenAI in this role. While not significant, the affective dimension showed a trend toward a positive association. Neither the cognitive nor the ethical dimension was a significant predictor in the model, indicating that these dimensions of AI literacy do not substantially influence the use of GenAI as a learning assistant in this context.
These results highlight the importance of
behavioral strategies in this learning environment,
while other AI Literacy dimensions seem to have less
influence on the use of GenAI as a Learning
Assistant.
Table 4: Regression (dependent variable: Learning Assistant).

Predictor: Estimate (p-value)
Intercept: 0.0080 (0.9838)
Affective dimension: 0.1967 (0.0512)
Behavioral dimension: 0.3479 (0.0005) ***
Cognitive dimension: 0.1296 (0.1439)
Ethical dimension: -0.0402 (0.6460)

Residual Std. Error: 0.7904 (df = 261)
R²: 0.1938
Adjusted R²: 0.1814
F Statistic: 15.68 (df = 4; 261) ***
6 DISCUSSION
6.1 Low Frequency of Use of AI as a
Learning Assistant
The results indicate that students currently
underutilize GenAI tools as comprehensive learning
assistants. Their most frequent application is
simplifying and clarifying course materials,
particularly by providing explanations of complex
concepts not adequately covered in class or
summarizing lengthy texts and articles. This result is
consistent with the usage frequencies collected by
von Garrel et al. (2023). This suggests that AI tools
are primarily embedded within cognitive learning
strategies, focusing on acquiring and processing
knowledge. These tasks are usually concrete and less
demanding, which makes the use of GenAI more
intuitive and can lead to greater acceptance among
students.
In contrast, students use AI tools significantly less frequently to support metacognitive learning
strategies. Activities such as creating structured
learning plans, assessing one's understanding, or
obtaining feedback on learning progress are rarely
supported by AI tools. This limited usage suggests
that students may either be unaware of the capabilities
of AI tools in these areas or may be reluctant to trust
AI in more personal and reflective processes.
One possible explanation for the higher
prevalence of cognitive applications over
metacognitive ones is that cognitive tasks are more
concrete and less complex. Asking for explanations
or summaries is a tangible activity that can be easily
integrated into existing study routines. In contrast,
metacognitive strategies, which involve planning,
monitoring, and evaluating one's learning, require a
higher level of self-awareness and self-regulation. It
is possible that students are either unfamiliar with the
relevant functions of AI tools or hesitate to use them
for tasks that require more intensive personal engagement.
Previous studies have consistently observed that
men tend to use AI tools more intensively than
women, particularly in contexts where AI is
employed as a writing assistant (Seufert et al., 2024;
Spirgi et al., 2024). However, this gender disparity is
no longer evident when AI is utilized as a learning
assistant. One possible explanation for this shift could
be that the functions associated with AI in educational
settings, such as obtaining explanations or
simplifying complex materials, may appeal equally to
both genders.
6.2 AI Literacy Influences AI Learning
A particularly noteworthy finding from the regression
analysis is the statistically significant impact of the
behavioral dimension on the use of GenAI as a
learning assistant. With a coefficient of 0.3479 and a
highly significant p-value of 0.0005, this dimension
shows a clear positive correlation with the frequency
of GenAI usage (see Table 4). This suggests that
students' actual behaviors when interacting with AI,
such as active usage, engagement, and interaction
with AI tools, play a critical role in determining how
frequently GenAI is integrated into learning
processes.
There are several potential reasons for the strong
influence of the behavioral dimension. First, this
indicates that learners who take a proactive approach
toward using AI are more likely to incorporate these
technologies into their daily learning routines. The
behavior of actively engaging with AI might reflect a
higher degree of confidence in their ability to use AI
tools effectively. Engagement with AI can also be
seen as a marker of greater technological competence,
which could lower the perceived barriers to the
frequent use of AI tools like GenAI.
Additionally, the accessibility and user-friendly
nature of GenAI tools, such as ChatGPT, may
significantly reinforce this behavior. These tools are
designed for ease of use, allowing learners who are
willing to experiment with them to quickly recognize
their benefits in various learning contexts. This ease
of use influences the frequency of technology use
(Davis, 1985). As learners actively engage with these
technologies and experience positive outcomes—
such as improved learning efficiency or enhanced
understanding—this can create a positive feedback
loop, motivating further usage.
Social factors also likely play a role. Observing
peers' successful use of AI tools in collaborative
learning environments can encourage others to follow
suit. According to social learning theory, individuals
are more likely to adopt behaviors that they see
modeled successfully by others (Bandura, 1977),
further reinforcing the behavioral dimension's
influence on AI adoption.
In summary, the significant influence of the
behavioral dimension on the use of GenAI as a
learning assistant can be explained by several factors:
the willingness to adopt new technologies, the
accessibility and feedback from AI tools, and the
social and institutional context. This dimension
highlights that actual behavior and active engagement
with AI are crucial in fully realizing the potential
benefits of AI in educational settings.
7 CONCLUSIONS
7.1 Theoretical Implications
The findings of this study not only highlight the
underutilization of GenAI tools as learning assistants
but also contribute to the theoretical understanding of
their role in educational contexts. Specifically, we
have developed the concept of GenAI as a learning
assistant by distinguishing its application between
cognitive and metacognitive learning strategies.
Cognitive strategies involve processes related to
knowledge acquisition and organization, while
metacognitive strategies focus on self-regulation,
reflection, and strategic planning. Additionally, this
study integrates Ng's AI literacy framework, linking
it to the concept of GenAI as a learning assistant.
While students frequently use AI to support
cognitive learning strategies—such as simplifying
complex concepts or summarizing materials—there
is a notable gap in the use of GenAI for more
complex, metacognitive learning processes, such as
self-regulation, reflection, and strategic planning.
This indicates that the full range of GenAI's
capabilities remains largely untapped in educational
settings. A key takeaway from the regression analysis
is the significant influence of the behavioral
dimension on GenAI usage. The proactive use of AI,
including active engagement and experimentation
with AI tools, is more important than merely
possessing cognitive knowledge about AI. This
suggests that "doing"—actively using and
experimenting with AI tools—is more critical than
"knowing" in terms of effectively integrating AI into
both cognitive and metacognitive learning processes.
7.2 Practical Implications
These findings emphasize the need to foster hands-on
engagement with AI technologies in educational
environments, as behavioral engagement is a crucial
driver of AI adoption and frequent usage. There is a
need to equip students with solid competencies in AI
literacy, particularly in practical and behavioral
aspects. As AI continues to evolve and become more
integrated into various sectors, the ability to use AI
tools effectively will become increasingly critical, not
only for academic success but also for future career
readiness (Baidoo-Anu & Owusu Ansah, 2023;
Laupichler et al., 2022). To address this gap, targeted
training programs should be developed to encourage
students to explore the full spectrum of AI tools and
their applications in learning. These programs should
prioritize active engagement and provide
opportunities for students to experiment with AI
rather than focusing solely on theoretical knowledge.
By doing so, educational institutions can empower
students to leverage AI as a comprehensive learning
assistant, ultimately enhancing both their learning
outcomes and their preparedness for a technology-
driven future. University lecturers play a key role in
this process, as they can provide students with easy
access to AI tools by integrating them into their
courses. In this way, the behavioral dimension of AI
literacy can be directly addressed in the classroom.
7.3 Further Research
While this study highlights important findings
regarding the underutilization of GenAI tools and the
significance of behavioral engagement, several areas
for future research remain. One key focus could be
exploring the specific barriers that prevent students
from fully utilizing GenAI for metacognitive learning
strategies, such as self-regulation and reflection.
Identifying whether these barriers stem from a lack of
awareness, trust issues, or insufficient AI literacy
could help inform the development of targeted
interventions.
7.4 Limitations
This study is subject to several limitations. First, the
sample primarily consisted of first-semester students,
which may limit the generalizability of the findings to
more experienced students. Additionally, the study
was conducted exclusively with students from the
local university, further restricting its scope. Lastly,
all participants were enrolled in economics-related
programs, meaning the results may not be fully
applicable to students from other academic
disciplines.
REFERENCES
Adiguzel, T., Kaya, M. H., & Cansu, F. K. (2023).
Revolutionizing education with AI: Exploring the
transformative potential of ChatGPT. Contemporary
Educational Technology, 15(3), ep429.
https://doi.org/10.30935/cedtech/13152
Alshami, A., Elsayed, M., Ali, E., Eltoukhy, A. E. E., &
Zayed, T. (2023). Harnessing the Power of ChatGPT
for Automating Systematic Review Process:
Methodology, Case Study, Limitations, and Future
Directions. Systems, 11(7), 351. https://doi.org/
10.3390/systems11070351
Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in
the Era of Generative Artificial Intelligence (AI):
Understanding the Potential Benefits of ChatGPT in
Promoting Teaching and Learning. SSRN Electronic
Journal. Advance online publication. https://doi.org/
10.2139/ssrn.4337484
Bandura, A. (1977). Social learning theory. Prentice Hall.
Chen, X., Xie, H., Zou, D., & Hwang, G.-J. (2020). Application and theory gaps during the rise of Artificial Intelligence in Education. Computers and Education: Artificial Intelligence, 1, 100002.
https://doi.org/10.1016/j.caeai.2020.100002
Cronbach, L. J. (1951). Coefficient alpha and the internal
structure of tests. Psychometrika, 16(3), 297–334.
https://doi.org/10.1007/BF02310555
Davis, F. D. (1985). A technology acceptance model for
empirically testing new end-user information systems:
Theory and results [Doctoral dissertation, Massachusetts Institute of Technology].
https://dspace.mit.edu/bitstream/handle/1721.1/15192/
14927137-mit.pdf
Hasselhorn, M., & Andju, S. L. (2021, November 24).
Metakognitive Lernstrategien. Dorsch Lexikon der
Psychologie. https://dorsch.hogrefe.com/stichwort/
lernstrategien-metakognitive
Hornberger, M., Bewersdorff, A., & Nerdel, C. (2023).
What do university students know about Artificial
Intelligence? Development and validation of an AI
literacy test. Computers and Education: Artificial Intelligence, 5, 100165. https://doi.org/10.
1016/j.caeai.2023.100165
Kandlhofer, M., Steinbauer, G., Hirschmugl-Gaisch, S., &
Huber, P. (2016). Artificial intelligence and computer
science in education: From kindergarten to university.
In 2016 IEEE Frontiers in Education Conference (FIE)
(pp. 1–9). IEEE. https://doi.org/10.1109/FIE.2016.
7757570
Laupichler, M. C., Aster, A., Schirch, J., & Raupach, T.
(2022). Artificial intelligence literacy in higher and
adult education: A scoping literature review. Computers and Education: Artificial Intelligence, 3, 100101. https://doi.org/10.1016/j.caeai.
2022.100101
Long, D., & Magerko, B. (2020). What is AI Literacy?
Competencies and Design Considerations. In R.
Bernhaupt, F. Mueller, D. Verweij, J. Andres, J.
McGrenere, A. Cockburn, I. Avellino, A. Goguey, P.
Bjørn, S. Zhao, B. P. Samson, & R. Kocielnik (Eds.),
Proceedings of the 2020 CHI Conference on Human
Factors in Computing Systems (pp. 1–16). ACM.
https://doi.org/10.1145/3313831.3376727
Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-
Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S.
(2023). Challenges and Opportunities of Generative AI
for Higher Education as Explained by ChatGPT.
Education Sciences, 13(9), 856. https://doi.org/10.
3390/educsci13090856
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S.
(2021a). AI Literacy: Definition, Teaching, Evaluation
and Ethical Issues. Proceedings of the Association for
Information Science and Technology, 58(1), 504–509.
https://doi.org/10.1002/pra2.487
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S.
(2021b). Conceptualizing AI literacy: An exploratory
review. Computers and Education: Artificial
Intelligence, 2, 100041. https://doi.org/10.1016/j.
caeai.2021.100041
Ng, D. T. K., Wu, W., Leung, J. K. L., Chiu, T. K. F., &
Chu, S. K. W. (2024). Design and validation of the AI
literacy questionnaire: The affective, behavioural,
cognitive and ethical approach. British Journal of
Educational Technology, 55(3), 1082–1104.
https://doi.org/10.1111/bjet.13411
Nückles, M. (2021, November 24). Kognitive
Lernstrategien. Dorsch Lexikon der Psychologie.
https://dorsch.hogrefe.com/stichwort/lernstrategien-
kognitive
Schneider, R. U. (2023, November 23). Chat-GPT erobert
die Universitäten: Darf der Computer die Seminararbeit
schreiben? NZZ. https://www.nzz.ch/gesellschaft/ki-
an-der-uni-wenn-chat-gpt-die-seminararbeit-schreibt-
ld.1766150
Schnotz, W. (2011). Pädagogische Psychologie (2.,
überarb. und erw. Aufl.). Beltz Kompakt. Beltz.
Seber, G. A. F., & Lee, A. J. (2012). Linear regression
analysis. John Wiley & Sons.
Seufert, S., & Guggemos, J. (2021). Zukunft der Arbeit mit
intelligenten Maschinen: Implikationen der Künstlichen
Intelligenz für die Berufsbildung - Einleitung zum
Beiheft. Steiner. https://www.alexandria.unisg.ch/
265016/
Seufert, S., & Spirgi, L. (2024). Soziotechnische
Systemgestaltung im Kontext generativer KI: eine
Konzeption in der Hochschulbildung. Interner
Arbeitsbericht.
Seufert, S., Spirgi, L., Delcker, J., Heil, J., & Ifenthaler, D. (2024). Umgang mit KI-Robotern: maschinelle Übersetzer, Textgeneratoren, Chatbots & Co.: Eine empirische Studie bei Erstsemester-
Studierenden. In K. Kögler, J. M.-C. Schmidt, & M.
Egloffstein (Eds.), Empirische Pädagogik: 38 (1).
Lehren und Lernen mit und über Künstliche Intelligenz
in der Aus- und Weiterbildung (pp. 47–72).
Verlag Empirische Pädagogik. https://www.vep-
landau.de/produkt/empirische-paedagogik-2024-38-1-
kap-3-digital/
Spirgi, L., Seufert, S., Delcker, J., & Heil, J. (2024). Student Perspectives on Ethical Academic Writing with ChatGPT: An Empirical Study in Higher Education. Proceedings of the 16th International Conference on Computer Supported Education (Volume 2), 179–186.
von Garrel, J., Mayer, J., & Mühlfeld, M. (2023).
Künstliche Intelligenz im Studium - Eine quantitative
Befragung von Studierenden zur Nutzung von
ChatGPT & Co. https://opus4.kobv.de/opus4-h-
da/frontdoor/deliver/index/docId/395/file/befragung_k
i-im-studium.pdf
Wild, K.-P., & Schiefele, U. (1994). Lernstrategien im Studium: Ergebnisse zur Faktorenstruktur und Reliabilität eines neuen Fragebogens. Zeitschrift für Differentielle und Diagnostische Psychologie, 15(4). https://psycnet.apa.org/record/1996-85746-001
Zhang, K., & Aslan, A. B. (2021). AI technologies for
education: Recent research & future directions. Computers and Education: Artificial Intelligence, 2, 100025. https://doi.org/10.1016/j.caeai.2021.100025