Practices, Challenges, and Training Needs of Faculty in Terms of Generative AI
Britt Petjärv¹, Vitali Retsnoi², Anne Uukkivi², Monica Vilms³, Elena Safiulina² and Oksana Labanova²
¹Centre for Humanities and Economics, TTK University of Applied Sciences, Pärnu mnt 62, Tallinn, Estonia
²Centre for Sciences, TTK University of Applied Sciences, Pärnu mnt 62, Tallinn, Estonia
³Institute of Engineering and Circular Economy, TTK University of Applied Sciences, Pärnu mnt 62, Tallinn, Estonia
Keywords: Generative AI (GenAI), Faculty Perceptions, Higher Education, Training Needs, Academic Integrity.
Abstract: This paper investigates the role of Generative AI (GenAI) tools in higher education at TTK University of
Applied Sciences (TTK UAS), the largest applied sciences university in Estonia providing higher education in
engineering. Through a survey of 81 faculty members, it examines the use of GenAI in teaching,
research, and administrative tasks, highlighting patterns of usage, perceived benefits, challenges, and training
needs. The findings reveal that while GenAI is seen as a valuable asset in personalized learning and efficient
task management, concerns about reliability, ethical implications, and workload dynamics persist. The study
emphasizes the importance of targeted training to address these challenges and support the effective
integration of GenAI tools in higher education.
1 INTRODUCTION
Generative AI (GenAI) is a type of artificial
intelligence (AI) technology capable of generating
new and unique outputs, such as images, text, audio,
videos, and 3D models (Holmes & Miao, 2023).
Due to
its ability to produce sophisticated and realistic
content that reflects human creativity, GenAI has
become a valuable tool in various industries,
including education, entertainment, and product
design (Castelli & Manzoni, 2022).
Since the launch of ChatGPT (Chat Generative
Pre-trained Transformer, the fastest-growing app in
history to date, based on large language models) at the
end of 2022, extensive discussions and widespread
research have emerged, raising both concerns and
innovative ideas for enhancing higher education
(Holmes & Miao, 2023). Based on studies and general
trends published over the past two years, it can be
stated that AI technologies have the potential to
significantly transform teaching and learning in
higher education (Holmes & Miao, 2023; Ward et al.,
2024).
However, several challenges remain, including
addressing ethical and quality concerns within higher
education, navigating implementation issues, and
utilizing appropriate pedagogical frameworks. To
best support faculty members at TTK UAS in
navigating these challenges, it was necessary to first
map out the existing knowledge, attitudes, and
practices related to AI. This foundation would allow
for targeted training to address the identified gaps.
More specifically, the aim of the study is to determine
the current use of GenAI tools in both the planning
and delivery of teaching, as well as in other activities
associated with faculty work (such as projects,
research, and administrative tasks) at TTK UAS, to
identify challenges and support their resolution
through targeted training initiatives. Accordingly, the
following research questions were formulated:
- What is the level of awareness and familiarity with GenAI tools among faculty at TTK UAS?
- In which work fields are these tools most used?
- What are the perceived potentials and concerns regarding GenAI across different fields at TTK UAS?
- What AI-related training and guidance do faculty members need to ensure that the use of AI tools is both effective and aligned with academic ethical principles?
On one hand, by mapping the opportunities and
risks associated with the use of AI-based applications
in the local higher education landscape and
comparing these findings with previous research, the
survey results will facilitate more focused and precise
planning of faculty training needs on site. On the
other hand, by contributing to the collection of data
on AI trends in education, this approach will enable
cross-border insights and support the design of a
common value space. Such a space can provide
support and training on the topics that currently
present the greatest challenges, at a time when
technological change is rapid but adaptation to it
takes time and requires a systematic approach.
Therefore, before conducting the survey, the
authors analysed previously published literature, with
the results appearing in a separate article titled
“Integrating Artificial Intelligence in Higher
Education: A Literature Review of Current Trends,
Challenges, and Future Directions” (Safiulina et al.,
2024). The key findings from this article have been
integrated with the results of the current literature
review, focusing on more recent studies.
2 LITERATURE REVIEW
Growing research (Kung et al., 2023; Lund et al.,
2023; Lee et al., 2024; Lepik, 2024; Peres et al.,
2023; Rahman & Watanobe, 2023; Safiulina et al.,
2024; Strzelecki, 2023; Ward et al., 2024) has explored faculty
perceptions of generative AI tools in higher
education, highlighting the expanding role of AI in
various academic tasks beyond teaching, such as
research activities, project-based work, and
administrative duties like drafting emails and
providing feedback. These tools support personalized
learning, enhance teaching materials, provide
research input, and assist with drafting responses to
student inquiries.
In addition to these benefits, recent literature
(Safiulina et al., 2024) identifies the role of AI in
personalized and adaptive learning as one of the most
transformative applications in higher education. AI
enables tailored educational experiences by adjusting
content to individual student needs and learning
styles, improving engagement and learning
outcomes. This finding aligns with the growing
recognition of AI's ability to offer personalized
feedback and improve the efficiency of educational
delivery (Ward et al., 2024; Lee et al., 2024).
Moreover, AI's impact on assessment processes
has been increasingly noted. Automated grading
systems and AI-enabled exams offer consistent and
timely evaluations, streamlining assessment tasks.
However, alongside these efficiencies, challenges
such as ethical concerns around fairness and bias in
AI-driven assessments remain significant (Neumann,
Rauschenberger & Schön, 2023). Safiulina et al.
(2024) also highlight this concern, emphasizing the
need for clear governance frameworks to ensure
equitable use of AI in assessment and teaching.
However, significant challenges have also been
identified regarding the use of AI-based text generators
like ChatGPT and other generative AI tools in
education. These include concerns about reliability,
the use of biased data (Obaid, Ali, & Yaseen, 2023), the
generation of inaccurate or fabricated content,
including fictitious citations (Rahman & Watanobe,
2023), and the potential for over-reliance on AI,
which may negatively affect students’ critical
thinking and problem-solving skills (Neumann,
Rauschenberger, & Schön, 2023). These concerns
align with broader discussions on AI ethics,
emphasizing the importance of transparency and
fairness in using AI systems for educational purposes
(Holmes & Miao, 2023; Yusuf, Pervin, & Román-
González, 2024).
The ethical concerns surrounding AI use are not
limited to bias. Privacy, data security, and the
responsible use of AI are prominent in the literature
(Safiulina et al., 2024). For instance, AI systems that
collect and analyse student data raise privacy and
security concerns, particularly regarding
unauthorized access to sensitive information. Higher
education institutions must address these concerns to
ensure the responsible adoption of AI tools (Safiulina
et al., 2024; Lepik, 2024).
The response from higher education institutions
(HEIs) has varied, ranging from those enforcing strict
limitations on the use of ChatGPT (Rahman &
Watanobe, 2023) to those developing guidelines on
the ethical and responsible use of AI tools (Neumann,
Rauschenberger, & Schön, 2023). Based on the
literature review for the current study, it can be
broadly stated that there is a relatively high level of
awareness and familiarity with generative AI tools
among faculty members across a wide geographical
range and multicultural backgrounds. However,
common to these studies is the finding that faculty
require support and training to address concerns
related to academic integrity (Lee et al., 2024; Ward
et al., 2024; Yusuf, Pervin, & Román-González,
2024). As Safiulina et al. (2024) suggest, there is an
urgent need for AI literacy among both educators and
students, as well as comprehensive institutional
strategies that integrate AI into professional
development programs to enhance AI competency.
As Chiu et al. (2023) suggest, rather than issuing strict
recommendations, institutions should develop
guidelines and policies that emphasize the
competencies needed for the future workforce,
supported by in-class and hands-on activities. To
fully comprehend the impact of generative AI on
assessment, AI and generative AI should be
integrated into teacher professional development
programs within universities. D. Ward and his
research group emphasize that, given the
complexities of generative AI, faculty require time
and resources to learn these tools effectively and to
adapt their classes so that students engage with AI
ethically and critically. Universities should therefore
creatively develop AI-powered learning assistants,
adaptive learning systems, and faculty support tools
while focusing on inclusiveness, transparency,
privacy, and safety, with a steadfast commitment to
enhancing human interaction and improving the
quality of teaching and learning (Ward et al., 2024). The most
comprehensive treatment of this topic has been
provided in the Guidance for generative AI in
education and research (Holmes & Miao, 2023).
Additionally, Lepik (2024) indicates that instructors
prefer regular, specific, and field-relevant training
formats to address ongoing challenges, further
underscoring the need for structured institutional
support.
3 METHODOLOGY
3.1 Sample
This study, conducted in June 2024, utilized a
quantitative approach to assess the perceptions of
academic staff regarding the use of GenAI in
educational settings. The survey was partly based on
a similar staff survey conducted in February 2024 at
the University of Tartu, Estonia's largest university
(ranked #358 in the QS World University Rankings),
which focused solely on the use of text-generating
bots in teaching.
The data for the current study were collected
through an online survey administered via Google
Forms, ensuring ease of access and participation for
all employees. The survey was designed to be self-
administered, allowing respondents to fill out the
questionnaire independently. This approach
facilitated anonymity and encouraged honest
responses, which is critical for obtaining accurate and
reliable data.
In the 2023/24 academic year, the total number of
TTK UAS academic staff was 237. The study group
consisted of 81 lecturers who responded to the survey.
The response rate was approximately 34.18%.
Participants were distributed as follows: by
discipline, engineering (71.60%) and social sciences
and humanities (28.40%); by teaching experience, up
to 3 years (20.99%), 4–6 years (9.88%), 7–9 years
(11.11%), and 10+ years (58.02%); and by AI training
experience, those who had previously participated in
AI training (51.85%) and those who had not (48.15%).
3.2 Data Collection
The survey comprises five distinct subscales, each
focusing on different aspects of generative AI usage
among lecturers. These include the Purpose of Using
Generative AI, Capabilities of Using Generative AI
Tools in Teaching, Risks Associated with Using
Generative AI Tools in Teaching, Training Needs in
AI, and Workload Dynamics of Academic Staff. The
responses for each of the items are summed to give a
total score.
The ‘Purpose of Using Generative AI’ subscale
(Cronbach’s alpha 0.77) is an 8-item scale used to
measure the extent to which lecturers use AI in the
preparation and conduct of teaching, research, writing
documentation, answering emails, etc. The total score
ranges from 2 to 27, with a higher score indicating
more frequent use of generative AI.
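As an illustration of this scoring procedure, the following minimal R sketch (R being the software later used for the analysis) computes a summed subscale total and its Cronbach's alpha; the data frame name survey and the item columns item1..item8 are hypothetical, since the questionnaire's variable names are not published.

# Minimal sketch, assuming a data frame `survey` whose columns item1..item8
# hold the eight 'Purpose of Using Generative AI' items coded 0-4.
items <- survey[, paste0("item", 1:8)]
survey$purpose_total <- rowSums(items)  # summed total score per respondent

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)
cronbach_alpha <- function(x) {
  k <- ncol(x)
  (k / (k - 1)) * (1 - sum(apply(x, 2, var)) / var(rowSums(x)))
}
cronbach_alpha(items)  # the paper reports 0.77 for this subscale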
The ‘Capabilities of Using Generative AI tools in
Teaching’ subscale (Cronbach’s alpha 0.84) is an 8-
item scale used to measure how useful generative AI
can be in teaching from a lecturer's perspective. The
total score ranges from 13 to 38, with a higher score
indicating that the lecturers see more opportunities for
using generative AI tools in teaching.
The ‘Risks Associated with Using Generative AI
Tools in Teaching’ subscale (Cronbach’s alpha 0.88)
is a 9-item scale used to measure how risky generative
AI can be in teaching from a lecturer's perspective.
The total score ranges from 14 to 45, with a higher
score indicating that the lecturers see more risks
associated with using AI tools in teaching.
The ‘Training Needs in AI’ subscale (Cronbach’s
alpha 0.82) is an 11-item scale used to assess which
topics should be covered in future AI training for
academic staff. The total score ranges from 31 to 55,
with a higher score indicating that lecturers need
more diverse training on how to effectively apply
generative AI tools in teaching.
The ‘Workload Dynamic of Academic Staff’
subscale is a 6-item scale (Cronbach’s alpha 0.89)
used to assess how the use of generative AI has
affected lecturers' workload. The total score ranges
from 0 to 18, with a higher score indicating that
lecturers perceive an increase in workload due to the
use of generative AI.
3.3 Analysis
Firstly, the descriptive statistics for the relevant scale
variables were calculated. Then, the relationships
between these scale variables were investigated.
Finally, it was examined whether these variables
differed significantly according to independent
variables such as institution, teaching discipline
(engineering or social sciences/humanities), teaching
experience, and whether the individuals were AI
trained or not.
Data were analysed using MS Excel and the
statistical software R. Correlations between variables
were examined using Spearman’s rho coefficient.
Differences between groups were assessed using the
Wilcoxon rank-sum test and the Kruskal-Wallis rank-
sum test. Significance was set at a minimum level of
0.05, with other significance levels (0.01 and 0.001)
also reported.
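As a hedged illustration, the base-R calls corresponding to these three steps might look as follows; the data frame dat and its columns (purpose_total, risks_total, discipline, experience) are hypothetical placeholders, and the moment-based skewness and kurtosis formulas shown are one common choice, not necessarily the one used here.

# Step 1: descriptive statistics for a scale total
describe_var <- function(x) {
  m <- mean(x); s <- sd(x)
  c(M = m, SD = s, quantile(x, c(0, 0.25, 0.5, 0.75, 1)),
    Skewness = mean((x - m)^3) / s^3,      # moment-based skewness
    Kurtosis = mean((x - m)^4) / s^4 - 3)  # excess kurtosis
}
describe_var(dat$purpose_total)

# Step 2: Spearman's rho between two scale totals
cor.test(dat$purpose_total, dat$risks_total, method = "spearman")

# Step 3: group differences - Wilcoxon rank-sum for two groups,
# Kruskal-Wallis for more than two
wilcox.test(risks_total ~ discipline, data = dat)   # e.g. engineering vs SSH
kruskal.test(risks_total ~ experience, data = dat)  # four experience bands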
4 RESULTS
The analysis of faculty awareness and usage of
GenAI tools at TTK UAS revealed varied levels of
familiarity, tool preference, and impact on academic
workload. The findings indicated that most faculty
members are acquainted with generative AI tools,
particularly text bots, with 82.72% of respondents
reporting usage in their work. Text bots were
followed by image generators (29.63%), audio
generators (8.64%), and video generators (7.41%).
Notably, 16.05% of respondents do not utilize any
GenAI tools in their professional activities.
A follow-up question asked how the use of these
tools has affected faculty workload.
The ‘Workload Dynamic of Academic Staff’ scale
consists of six items measured on the following scale:
0 – ‘Not relevant to me’, 1 – ‘Decreased’, 2 –
‘Remained the same’, and 3 – ‘Increased’.
Descriptive statistics for the relevant scale variables
were calculated, and the in-depth examination
revealed that, in most cases, AI adoption left faculty
workloads unchanged or shifted them slightly in one
direction or the other. It is
essential to specify that the responses reflect
perceived values when addressing how the use of
these tools has affected faculty's workload. Faculty
did not perceive a change in workload in Student
Support and Feedback (M=2.00, SD=0.58) and in
Research (M=1.82, SD=0.56). The workload
remained the same or slightly decreased in
Preparation of Teaching Materials (M=1.75,
SD=0.61), in Project Work (M=1.65, SD=0.57) and
in Administrative Duties - such as email management
and routine communication (M=1.71, SD=0.54). The
workload likewise tended to remain stable or to
increase slightly in Assessment of Student Work
(M=2.23, SD=0.56). Note that the workload score
tends to be higher when the lecturer's field of activity
is broader, rather than reflecting a genuine increase in
workload.
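For reference, per-item means and standard deviations of this kind can be obtained with a one-line summary in R; the column names w1..w6 for the six workload items are hypothetical, and the paper does not state how 0 (‘Not relevant to me’) responses were handled.

# Hypothetical columns w1..w6 hold the six workload items coded 0-3.
sapply(survey[paste0("w", 1:6)], function(x) c(M = mean(x), SD = sd(x)))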
The ‘Purpose of Using Generative AI’ scale
consisted of eight items measured on the following
scale: 0 – ‘Not Relevant to Me’, 1 – ‘No, I Don't Plan
to Start’, 2 – ‘No, but I Plan to Start’, 3 – ‘Yes,
Occasionally’, and 4 – ‘Yes, Regularly’. Descriptive
statistics for the relevant scale variables were
calculated; the most frequent use of generative
AI tools was in preparing teaching materials (M=2.68,
SD=0.84). Generative AI tools for conducting
teaching activities in exercises and practical sessions
(M=2.32, SD=0.98), for research-related tasks
(M=2.31, SD=0.94), and for writing project
documentation (M=2.29, SD=0.95) were viewed as
promising but not yet fully adopted. The application
of AI in student-related tasks, particularly for non-
graded assignments (M=1.99, SD=0.87) and graded
evaluations (M=1.89, SD=0.90), remained relatively
low, as did AI assistance in managing email
correspondence (M=1.89, SD=1.03). Generative AI
tools were the least commonly used for guiding
students and providing feedback (M=1.64, SD=0.84).
In this survey, there were two additional questions
about how lecturers regulate the use of AI in practical
work with students and the reasons for refraining
from using AI in teaching. For these questions,
selecting multiple answers was permitted. 45.7% of
respondents allow students to use text bots freely
without altering teaching methods. 35.8% incorporate
text bots while educating students on associated risks
and opportunities. 17.3% have adjusted their teaching
methods to integrate AI tools. A small portion (4.9%)
prohibit AI use and enforce this restriction.
Common reasons for avoiding AI tools in
teaching centred on concerns about reliability,
guidance, and educational impact: 40.7% expressed
doubts about the reliability of AI-generated
information. 35.8% cited a lack of guidance on
responsible and ethical AI usage. 33.3% were
concerned about AI's impact on student knowledge.
Other notable reasons included AI errors (25.9%),
incompatibility with subjects taught (23.5%), and
difficulty in effective AI use (23.5%).
Table 1: Means (M) and standard deviations (SD) for the ‘Capabilities of Using Generative AI Tools in Teaching’ items (n=81).

Capabilities of Using Generative AI Tools in Teaching | M | SD | Conclusion
Enables the quick creation of personalized learning materials | 3.30 | 0.93 | Neutral, dissension
Makes teaching more practical and closer to real life by supporting the use of active learning methods | 3.12 | 1.08 | Neutral, dissension
Enables automated grading and feedback | 3.26 | 0.95 | Neutral, dissension
Encourages experimentation with new pedagogical approaches and promotes creativity | 3.69 | 1.03 | Rather Agree
Supports the development of more systematic and analytical thinking | 2.85 | 1.12 | Neutral, dissension
Develops skills in students that will be required in the future labour market (including expanding current career opportunities and enhancing their ability to succeed in the labour market) | 3.64 | 1.02 | Rather Agree
Does not provide extra opportunities to enrich the study (the reverse scale is used) | 3.80 | 0.97 | Rather Disagree
Complements traditional teaching methods | 3.88 | 0.97 | Rather Agree
Table 2: Means (M) and standard deviations (SD) for the ‘Risks Associated with Using Generative AI Tools in Teaching’ items (n=81).

Risks Associated with Using Generative AI Tools in Teaching | M | SD | Conclusion
Raises the risk of academic fraud because AI-generated content is difficult to detect | 4.20 | 0.90 | Rather Agree
Produces unreliable content, which in turn threatens academic integrity and distorts the worldview | 4.01 | 0.93 | Rather Agree
Does not support the development of students' critical thinking and makes the learning process "copy-paste" based | 3.86 | 1.16 | Rather Agree
Causes security risks related to the safety of users of systems that manage AI tools and to the unfair use of personal data | 3.63 | 0.98 | Rather Agree
Blurs the boundaries and principles of ethical and unethical academic behaviour | 3.81 | 1.00 | Rather Agree
It has a negative impact on the development of students' mental and emotional health | 3.00 | 0.92 | Neutral, dissension
Reduces the component of creativity and originality in students' work | 3.43 | 1.22 | Neutral, dissension
Makes it difficult to treat students equally | 3.63 | 1.17 | Rather Agree
There are no significant risks associated with using AI-based tools in education (the reverse scale is used) | 3.79 | 1.10 | Rather Disagree
The ‘Capabilities of Using Generative AI Tools in
Teaching’ and ‘Risks Associated with Using
Generative AI Tools in Teaching’ scales each
consisted of items measured on a 5-point Likert scale:
1 – ‘Strongly Disagree’, 2 – ‘Disagree’, 3 – ‘Neutral’,
4 – ‘Agree’, and 5 – ‘Strongly Agree’. Descriptive
statistics for the relevant scale variables were
calculated and are presented in Tables 1 and 2.
Among the open responses, it was highlighted that
integrating AI provides the opportunity to break
exercises down into smaller parts or combine them
into a whole, which supports learning. It also helps to
offer different examples from practical life for the
solutions to a single task. AI-based technology has
provided new possibilities for doing things differently
and for experiencing both learning and teaching in
new ways. In the open responses, a direct threat to the
future of engineering was even emphasized,
suggesting that allowing AI-based solutions could
create a springboard for charlatans in the field, posing
a danger to society.
To evaluate the overall impact of generative AI on
higher education, a 5-point Likert scale was used: 1 –
‘Very Negative’, 2 – ‘Rather Negative’, 3 – ‘Neutral’,
4 – ‘Rather Positive’, and 5 – ‘Very Positive’. 61.73%
of participants rated it as rather or very positive,
20.99% as neutral, and 17.28% as rather or very
negative. In assessing the overall impact, speed-
related aspects (more information, faster) were the
most frequently mentioned in the comments. As a
counterpoint to the ability to process information
more quickly, open responses also highlighted that
the issue is not just about the best technical solution
but about what it should enable in one’s ongoing
work—something that isn’t always clear, with
significant time potentially spent on adjusting
workflows (meta-work). Overall, it was noted that
achieving a positive impact requires proper guidance
and the parallel development of supportive skills such
as critical thinking and creativity; otherwise, the role
of independent thinking may diminish. Descriptive
statistics and correlations of the relevant scale
variables and AI overall impact rate were calculated
for the total sample (Tables 3 and 4). The results
indicate, similarly to the previous findings, that the
participants rated the overall impact of generative AI
tools on higher education as rather positive on a 5-
point Likert scale (M=3.49, SD=0.88).
There was also a moderate positive correlation
between ‘Capabilities of Using Generative AI’
and ‘Training Needs in AI’ (ρ=0.30, p=0.0058), as
well as with the ‘Overall Impact Rate of AI’ (ρ=0.61,
p<0.001). A Spearman’s rho correlation analysis
found a moderate negative correlation between ‘Risks
Associated with Generative AI’ and ‘Purpose of
Using Generative AI’ (ρ=-0.32, p=0.0042),
‘Capabilities of Using Generative AI’ (ρ=-0.495,
p<0.001), and the ‘Overall Impact Rate of AI’ (ρ=-0.52,
p<0.001).
There was no significant correlation between
‘Risks Associated with Generative AI’ and ‘Training
Needs in AI’. A weak positive correlation
was found between ‘Training Needs in AI’ and
‘Overall Impact Rate of AI’ (ρ=0.26, p=0.0175).
However, no significant correlation was found
between the ‘Workload Dynamics of Academic Staff’
and other relevant variables.
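A correlation matrix of this kind (see Table 4) can be assembled from pairwise cor.test calls; the sketch below is illustrative only, with scores a hypothetical data frame holding the five scale totals and the overall impact rating.

# Lower-triangle Spearman matrix with significance stars, in the style of Table 4.
vars  <- c("purpose", "capabilities", "risks", "training", "workload", "impact")
stars <- function(p) ifelse(p < 0.001, "***",
                     ifelse(p < 0.01,  "**",
                     ifelse(p < 0.05,  "*", "")))
rho <- matrix("", length(vars), length(vars), dimnames = list(vars, vars))
for (i in seq_along(vars)) {
  for (j in seq_len(i)) {
    if (i == j) { rho[i, j] <- "1"; next }
    ct <- suppressWarnings(  # ties in Likert data prevent exact p-values
      cor.test(scores[[vars[i]]], scores[[vars[j]]], method = "spearman"))
    rho[i, j] <- paste0(round(ct$estimate, 2), stars(ct$p.value))
  }
}
print(rho, quote = FALSE)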
The Wilcoxon rank-sum test and the Kruskal-
Wallis rank-sum test were used to assess statistically
significant differences between groups based on
institution, teaching discipline (engineering or social
sciences/humanities), teaching experience, and
whether the individuals were AI trained or not. It was
found that there were no statistically significant
differences between the groups based on institution
and teaching experience with respect to all relevant
variables. Additionally, the Wilcoxon rank sum test
revealed that the total scores for ‘Risks Associated
with Generative AI’ differ significantly according to
the lecturers’ teaching discipline (W=891,
p=0.01905). Specifically, the test results indicate that
the total scores for ‘Risks Associated with Generative
AI’ were higher for lecturers who teach engineering
disciplines than for those who teach social sciences or
humanities.
Table 3: Descriptive Statistics of the relevant scale variables and the overall impact rate of AI (n=81).
Variable M SD MIN 25% 50% 75% MAX Skewness Kurtosis
Purpose of Using AI 13.53 5.86 0 10 13 17 27 -0.17 0.24
Capabilities of Using AI 27.54 5.51 13 24 28 32 38 -0.36 0.15
Risks Associated with AI 33.37 6.80 14 30 34 38 45 -0.69 0.47
Training Needs in AI 47.54 5.67 31 44 48 52 55 -0.82 0.42
Workload Dynamic of Acad. Staff 6.57 4.88 0 2 7 11 18 0.03 -1.26
Overall Impact Rate of AI 3.49 0.88 1 3 4 4 5 -0.71 -0.15
Table 4: Spearman’s rho coefficients for the relevant scale variables and the overall impact rate of AI (n=81).

Variable | Purpose of Using AI | Capabilities of Using AI | Risks Associated with AI | Training Needs in AI | Workload Dynamic of Academic Staff | Overall Impact Rate of AI
Purpose of Using AI | 1 | | | | |
Capabilities of Using AI | 0.45*** | 1 | | | |
Risks Associated with AI | -0.32** | -0.495*** | 1 | | |
Training Needs in AI | 0.29** | 0.30** | 0.07 | 1 | |
Workload Dynamic of Academic Staff | 0.19 | 0.08 | 0.001 | -0.04 | 1 |
Overall Impact Rate of AI | 0.49*** | 0.61*** | -0.52*** | 0.26* | 0.03 | 1
***p<0.001 **p<0.01 *p<0.05
Table 5: Means (M) and standard deviations (SD) for the ‘Training Needs in AI’ items (n=81).

Training for academic staff should focus on the following topics | M | SD | Conclusion
Practical recommendations for using text bots in planning and organizing teaching, including evaluation and feedback | 4.49 | 0.63 | Rather Agree
Text bots (including those that generate images, videos, and audio) and academic fraud | 4.46 | 0.78 | Rather Agree
Learning methods in the age of AI | 4.38 | 0.66 | Rather Agree
The future of the labor market and the skills required by graduates | 4.21 | 0.89 | Rather Agree
Enhancing the effectiveness of teaching staff using AI tools | 4.48 | 0.63 | Rather Agree
Legal and ethical aspects of data protection and the use of text, image, video, and audio generators | 4.43 | 0.79 | Rather Agree
Equal treatment of students in using generative AI tools | 4.15 | 1.00 | Rather Agree
The risks associated with the use of generative AI tools for humanity (including negative environmental impacts) | 3.98 | 1.12 | Rather Agree
The use of text bots in scientific research | 4.26 | 0.92 | Rather Agree
The use of text bots in project work | 4.27 | 0.84 | Rather Agree
There is no need for additional training (the reverse scale is used) | 4.43 | 1.01 | Rather Disagree
The Wilcoxon rank sum test also found that the
total scores for ‘Risks Associated with Generative
AI’ (W=1139.5, p=0.00245) and ‘Capabilities of
Using Generative AI’ (W=606, p=0.04393) differ
significantly between lecturers who had previously
attended AI training and those who had not.
Specifically, lecturers trained in AI perceive fewer
risks and more opportunities in using generative AI
in teaching compared to those who are untrained.
Finally, we collected information on which
topics faculty believe should receive the most
attention in training. The ‘Training Needs in AI’
scale consists of items measured on a 5-point Likert
scale: 1 – ‘Strongly Disagree’, 2 – ‘Disagree’, 3 –
‘Neutral’, 4 – ‘Agree’, and 5 – ‘Strongly Agree’.
Descriptive statistics for the relevant scale variables
were calculated and are presented in Table 5.
5 DISCUSSION
The findings of this study provide detailed insights
into the use and perceptions of GenAI among
faculty members at TTK UAS, which align closely
with trends observed in the broader academic
literature. The existing literature emphasizes the
potential for AI tools to transform higher education,
particularly in areas such as personalized learning
and automated assessments (Ward et al., 2024;
Safiulina et al., 2024). Similarly, the findings of this
study indicate that faculty members at TTK UAS
recognize the potential of GenAI tools in preparing
teaching materials (M=2.68, SD=0.84) and
conducting research (M=2.31, SD=0.94).
The literature emphasizes the importance of
addressing ethical concerns and ensuring the
transparency of AI systems (Holmes & Miao, 2023;
Neumann, Rauschenberger, & Schön, 2023). The
current study reinforces these findings by
identifying reliability (40.7%) and ethical concerns
(35.8%) as significant barriers to the adoption of
GenAI tools. The survey results also highlight
practical challenges, such as a lack of training and
guidance, which align with previous studies (Chiu
et al., 2023; Lepik, 2024; Safiulina et al., 2024;
Ward et al., 2024) advocating for structured
institutional support and professional development
programs.
While the literature (Chiu et al., 2023; Lee et al.,
2024; Neumann et al., 2023; Ward et al., 2024)
underscores the efficiency gains associated with AI
in educational settings, the workload dynamics
observed in this study present a more complex
picture. Although AI tools are seen as beneficial in
streamlining certain tasks, their impact on workload
appears to vary across different academic activities.
The observed slight increase in workload related to
the Assessment of Student Work, as reported in this
study, may reflect the additional effort required to
evaluate AI-generated student submissions. In this
context, addressing workload concerns underscores
the importance of employing appropriate
pedagogical frameworks and implementing more
nuanced strategies for integrating AI tools, ensuring
that faculty workloads are optimized without
introducing additional burdens.
One of the key contributions of this study is the
identification of discipline-specific differences in
perceptions of GenAI risks and opportunities.
Engineering faculty members, for instance, reported
higher levels of concern regarding AI risks
compared to their counterparts in social sciences
and humanities. This aligns with previous research
suggesting that the perceived applicability and risks
of AI tools can vary widely depending on the
disciplinary context (Rahman & Watanobe, 2023).
The findings also reveal significant correlations
between faculty perceptions of GenAI capabilities,
risks, and training needs. Faculty members with
prior AI training were more likely to perceive
opportunities and fewer risks associated with
GenAI. This underscores the critical role of training
in shaping positive attitudes and facilitating the
effective integration of AI tools in teaching and
research.
Overall, this study extends the existing literature
by providing localized insights into the challenges
and opportunities associated with GenAI in higher
education. Challenges, particularly in guiding
students and providing personalized learning
experiences, can be addressed through well-
designed training programs that support faculty, as
the literature suggests that GenAI tools can play a
critical role in assisting vulnerable student groups,
including those with learning disabilities (Lee et al.,
2024). While 45.7% of TTK UAS faculty allow
students to use AI without adapting teaching
practices—aligning with findings from the
University of Tartu study, where 22.03% reported
similar permissiveness—this may reflect a lack of
awareness or readiness to address associated risks
and their underlying causes. In this context,
practical, goal-oriented training on integrating
GenAI tools into teaching, in alignment with
academic and ethical values, is essential, as outright
prohibition is neither practical nor sustainable.
The study also reveals that generative AI tools
are moderately utilized in research-related tasks and
project documentation, yet faculty express a desire
for additional training in these areas, indicating
untapped potential for streamlining such activities.
Faculty at TTK UAS have expressed strong interest
in actionable, discipline-specific guidance,
emphasizing the need for institutional efforts to
prioritize these areas to fully harness AI’s potential
while mitigating risks and promoting equitable
educational practices.
These findings underscore the importance of
developing tailored educational programs and
institutional policies that address both the ethical
and practical dimensions of AI integration.
6 CONCLUSIONS
This study contributes to the growing body of
research on the integration of generative AI in
higher education by providing empirical evidence
on faculty perceptions, usage patterns, and training
needs at TTK UAS. The findings highlight that
while GenAI tools have significant potential to
enhance teaching, research, and administrative
tasks, their effective implementation requires
addressing key challenges related to ethical
concerns and AI literacy.
The study underscores the need for targeted
training programs to equip faculty with the skills
and knowledge needed to effectively use AI tools
while adhering to ethical guidelines. By
emphasizing discipline-specific needs, institutions
can ensure that AI tools are integrated in ways that
enhance educational quality without compromising
academic integrity.
The identification of correlations between
training, perceptions of risks, and opportunities
suggests that increasing access to AI training could
play a pivotal role in overcoming barriers to
adoption. Additionally, the underutilization of AI in
student-focused tasks indicates a need for further
exploration of how these tools can enhance student
engagement and learning outcomes.
Future research should examine the longitudinal
impact of AI integration on both faculty workload
and student learning outcomes. Additionally, there
is a need for comparative studies across institutions
to identify best practices and develop standardized
frameworks for the ethical and effective use of AI
in higher education. Such efforts help higher
education institutions manage the challenges of
adopting AI, leading to a more inclusive and
innovative learning environment.
REFERENCES
Castelli, M., & Manzoni, L. (2022). Generative models in
artificial intelligence and their applications. Applied
Sciences, 12(9), 4127.
Chiu, T. K., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M.
(2023). Systematic literature review on opportunities,
challenges, and future research recommendations of
artificial intelligence in education. Computers and
Education: Artificial Intelligence, 4, 100118.
Holmes, W., & Miao, F. (2023). Guidance for generative
AI in education and research. UNESCO Publishing.
Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De
Leon, L., Elepaño, C., ... & Tseng, V. (2023).
Performance of ChatGPT on USMLE: potential for
AI-assisted medical education using large language
models. PLoS digital health, 2(2), e0000198.
Lee, D., Arnold, M., Srivastava, A., Plastow, K., Strelan,
P., Ploeckl, F., ... & Palmer, E. (2024). The impact of
generative AI on higher education learning and
teaching: A study of educators’ perspectives.
Computers and Education: Artificial Intelligence, 6,
100221.
Lepik, K. (2024, May 17). Tartu Ülikooli õppejõudude
praktikad ja probleemid seoses tekstirobotite
kasutamisega õppetöös [Practices and problems of
University of Tartu lecturers in using text bots in
teaching; presentation slides]. E-kursuse
kvaliteedimärgi lõpuseminar [e-course quality label
final seminar], Tallinn.
Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray,
S., & Wang, Z. (2023). ChatGPT and a new academic
reality: Artificial Intelligence‐written research papers
and the ethics of the large language models in
scholarly publishing. Journal of the Association for
Information Science and Technology, 74(5), 570-581.
Neumann, M., Rauschenberger, M., & Schön, E. M.
(2023, May). “We need to talk about ChatGPT”: The
future of AI and higher education. In 2023 IEEE/ACM
5th International Workshop on Software Engineering
Education for the Next Generation (SEENG) (pp. 29-
32). IEEE.
Obaid, O. I., Ali, A. H., & Yaseen, M. G. (2023). Impact
of Chat GPT on Scientific Research: Opportunities,
Risks, Limitations, and Ethical Issues. Iraqi Journal
for Computer Science and Mathematics, 4(4), 13-17.
Peres, R., Schreier, M., Schweidel, D., & Sorescu, A.
(2023). On ChatGPT and beyond: How generative
artificial intelligence may affect research, teaching,
and practice. International Journal of Research in
Marketing, 40(2), 269-275.
Rahman, M. M., & Watanobe, Y. (2023). ChatGPT for
education and research: Opportunities, threats, and
strategies. Applied Sciences, 13(9), 5783.
Safiulina, E., Labanova, O., Uukkivi, A., Petjärv, B., &
Vilms, M. (2024). Integrating Artificial Intelligence
in Higher Education: A Literature Review of Current
Trends, Challenges, and Future Directions.
ICERI2024 Proceedings (In print)
Strzelecki, A. (2023). To use or not to use ChatGPT in
higher education? A study of students’ acceptance
and use of technology. Interactive learning
environments, 1-14.
Ward, D., Loshbaugh, H. G., Gibbs, A. L., Henkel, T.,
Siering, G., Williamson, J., & Kayser, M. (2024).
How Universities Can Move Forward With
Generative AI in Teaching and Learning. Change:
The Magazine of Higher Learning, 56(1), 47-54.
Yusuf, A., Pervin, N., & Román-González, M. (2024).
Generative AI and the future of higher education: a
threat to academic integrity or reformation? Evidence
from multicultural perspectives. International
Journal of Educational Technology in Higher
Education, 21(1), 21.