7 CONCLUSIONS
Balancing fears (e.g., that using AI tools may be
perceived as academic dishonesty and lead to lower
grades, or that AI-based correction tools may grade
unfairly) against potential positive effects (e.g., free
use of ChatGPT as a powerful tool for academic
studies) is essential for responsible AI integration in
education.
The overarching ethical aspect of 'transparency' is
crucial in addressing these concerns. Additionally,
the ethical principle of 'fairness' is central to
discussions about equal access, the impact on hard
work, and the potential biases associated with AI
tools. To alleviate concerns and promote responsible
AI usage in education, universities should provide
clear guidelines, educational resources, and open
discussions to empower students to make informed
decisions and navigate the evolving landscape of AI
in academia.
Limited communication or education around the
ethical and practical use of AI tools in education can
contribute to these concerns. Students may feel that
they lack guidance on how to navigate this issue
responsibly.
Developing norms and guidelines for the ethical
use of generative AI for academic writing currently
presents a significant and complex challenge for
universities. The requirement to label AI-generated
content in academic work can contribute to
strengthening and upholding ethical, academic, and
pedagogical standards. Clear marking helps preserve
academic integrity by distinguishing between
students' own work and machine-generated content
(Boyd-Graber et al., 2023). Teachers can
better assess the quality of AI-generated content and
evaluate how well students use and understand these
AI systems. This measure could also promote
students' awareness of responsible AI use and its
impact on their learning processes.
However, the results of our studies reveal
substantial arguments against labelling AI-generated
passages in academic work. Labelling could
stigmatize the use of AI in academic work, implying
that its use is inherently less valuable or legitimate.
Mandatory labelling could discourage students from
exploring and using new technologies, inhibiting
technology acceptance and the development of
necessary AI-related competencies. Moreover,
precisely defining what constitutes AI-generated
content may be challenging when the human
contribution is substantial, for example when students
heavily edit and customize AI outputs. Demanding
labelling could also be interpreted
as distrust in students' ability to handle AI
independently and responsibly. From students'
perspective, there is also a valid concern that open
communication about using AI in their work might
lead to less favourable evaluations or a loss of trust
on the part of teachers.
A significant dilemma thus arises between
establishing ethical academic integrity standards by
requiring the declaration of ChatGPT-generated
outputs and nurturing students' AI competencies so
that they learn to utilize AI tools effectively. In
further research, we aim
to delve deeper into this student perspective to
explore solutions that enable AI's ethical and
responsible use in higher education while
simultaneously supporting the development of
necessary AI competencies rather than hindering
them.
REFERENCES
Alshami, A., Elsayed, M., Ali, E., Eltoukhy, A. E. E., &
Zayed, T. (2023). Harnessing the Power of ChatGPT
for Automating Systematic Review Process:
Methodology, Case Study, Limitations, and Future
Directions. Systems, 11(7), 351. https://doi.org/10.3390/systems11070351
Bao, L., Krause, N. M., Calice, M. N., Scheufele, D. A.,
Wirz, C. D., Brossard, D., Newman, T. P., &
Xenos, M. A. (2022). Whose AI? How different publics
think about AI and its social impacts. Computers in
Human Behavior, 130, 107182. https://doi.org/10.1016/j.chb.2022.107182
Boyd-Graber, J., Okazaki, N., & Rogers, A. (2023). ACL
2023 policy on AI writing assistance. https://2023.aclweb.org/blog/ACL-2023-policy/
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023).
Chatting and cheating: Ensuring academic integrity in
the era of ChatGPT. Innovations in Education and
Teaching International, 1–12. https://doi.org/10.1080/14703297.2023.2190148
Cronbach, L. J. (1951). Coefficient alpha and the internal
structure of tests. Psychometrika, 16(3), 297–334.
https://doi.org/10.1007/BF02310555
Dang, J., & Liu, L. (2022). Implicit theories of the human
mind predict competitive and cooperative responses to
AI robots. Computers in Human Behavior, 134, 107300.
https://doi.org/10.1016/j.chb.2022.107300
Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-
Assaf, H., Kellogg, K., Rajendran, S., Krayer, L.,
Candelon, F., & Lakhani, K. R. (2023). Navigating the
Jagged Technological Frontier: Field Experimental
Evidence of the Effects of AI on Knowledge Worker
Productivity and Quality. SSRN Electronic Journal.
Advance online publication. https://doi.org/10.2139/ssrn.4573321