comparing these findings with previous research, the
survey results will facilitate more focused and precise
planning of on-site faculty training. At the same time,
by contributing to the collection of data on AI trends
in education, this approach will enable cross-border
insights and support the design of a common value
space, offering support and training on the topics that
currently present the greatest challenges. Technological
change is rapid, but adapting to it takes time and
requires a systematic approach.
Therefore, before conducting the survey, the
authors analysed previously published literature, with
the results appearing in a separate article titled
“Integrating Artificial Intelligence in Higher
Education: A Literature Review of Current Trends,
Challenges, and Future Directions” (Safiulina et al.,
2024). The key findings from that article have been
integrated with the results of the current literature
review, which focuses on more recent studies.
2 LITERATURE REVIEW
A growing body of research (Kung et al., 2023; Lund,
2023; Lee et al., 2024; Lepik, 2024; Peres et al., 2023;
Rahman & Watanobe, 2023; Safiulina et al., 2024;
Strzelecki, 2023; Ward et al., 2024) has explored faculty
perceptions of generative AI tools in higher
education, highlighting the expanding role of AI in
various academic tasks beyond teaching, such as
research activities, project-based work, and
administrative duties like drafting emails and
providing feedback. These tools support personalized
learning, enhance teaching materials, provide
research input, and assist with drafting responses to
student inquiries.
In addition to these benefits, recent literature
(Safiulina et al., 2024) identifies the role of AI in
personalized and adaptive learning as one of the most
transformative applications in higher education. AI
enables tailored educational experiences by adjusting
content to individual student needs and learning
styles, improving engagement and learning
outcomes. This finding aligns with the growing
recognition of AI's ability to offer personalized
feedback and improve the efficiency of educational
delivery (Ward et al., 2024; Lee et al., 2024).
Moreover, AI's impact on assessment processes
has been increasingly noted. Automated grading
systems and AI-enabled exams offer consistent and
timely evaluations, streamlining assessment tasks.
However, alongside these efficiencies, challenges
such as ethical concerns around fairness and bias in
AI-driven assessments remain significant (Neumann,
Rauschenberger, & Schön, 2023). Safiulina et al.
(2024) also highlight this concern, emphasizing the
need for clear governance frameworks to ensure
equitable use of AI in assessment and teaching.
At the same time, significant challenges have been
identified regarding the use of AI-based text generators
like ChatGPT and other generative AI tools in
education. These include concerns about reliability,
the use of biased data (Obaid & Yaseen, 2023), the
generation of inaccurate or fabricated content,
including fictitious citations (Rahman & Watanobe,
2023), and the potential for over-reliance on AI,
which may negatively affect students’ critical
thinking and problem-solving skills (Neumann,
Rauschenberger, & Schön, 2023). These concerns
align with broader discussions on AI ethics,
emphasizing the importance of transparency and
fairness in using AI systems for educational purposes
(Holmes & Miao, 2023; Yusuf, Pervin, & Román-
González, 2024).
The ethical concerns surrounding AI use are not
limited to bias. Privacy, data security, and the
responsible use of AI are prominent in the literature
(Safiulina et al., 2024). For instance, AI systems that
collect and analyse student data raise privacy and
security concerns, particularly regarding
unauthorized access to sensitive information. Higher
education institutions must address these concerns to
ensure the responsible adoption of AI tools (Safiulina
et al., 2024; Lepik, 2024).
The response from higher education institutions
(HEIs) has varied, ranging from those enforcing strict
limitations on the use of ChatGPT (Rahman &
Watanobe, 2023) to those developing guidelines on
the ethical and responsible use of AI tools (Neumann,
Rauschenberger, & Schön, 2023). Based on the
literature review conducted for the current study, it can
be broadly stated that awareness of and familiarity with
generative AI tools is relatively high among faculty
members across a wide range of geographical and
cultural contexts. However,
common to these studies is the finding that faculty
require support and training to address concerns
related to academic integrity (Lee et al., 2024; Ward
et al., 2024; Yusuf, Pervin, & Román-González,
2024). As Safiulina et al. (2024) suggest, there is an
urgent need for AI literacy among both educators and
students, as well as comprehensive institutional
strategies that integrate AI into professional
development programs to enhance AI competency.
As Chiu (2024) suggests, rather than issuing strict
recommendations, institutions should develop
guidelines and policies that emphasize the