Authors: Rina Azoulay¹; Tirza Hirst¹ and Shulamit Reches²
Affiliations: ¹Department of Computer Science, Jerusalem College of Technology, Jerusalem, Israel; ²Department of Mathematics, Jerusalem College of Technology, Jerusalem, Israel
Keyword(s):
ChatGPT, Large Language Models, Computer Science Education, Plagiarism, Integrity, LLMs.
Abstract:
This research addresses the profound challenges presented by sophisticated large language models (LLMs) like ChatGPT in educational settings, focusing on computer science and programming instruction. State-of-the-art LLMs can generate solutions to the standard exercises assigned to students to bolster their analytical and programming skills. However, the ease of using AI to generate programming solutions poses a risk to the educational process and to skill development, as it may lead students to depend on these solutions instead of engaging in their own problem-solving efforts. Our study suggests collaborative methods involving computer science educators and AI developers to provide evaluators with tools to distinguish between code produced by ChatGPT and code genuinely created by students. We propose a novel steganography-based technique for watermarking AI-generated code. By implementing this comprehensive strategy and effectively utilizing such technology through the combined efforts of educators, course administrators, and partnerships with AI developers, we believe it is possible to preserve the integrity of programming education in an age increasingly influenced by code-generating LLMs.
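To make the idea of steganographic code watermarking concrete, the following is a minimal sketch, not the scheme proposed in the paper: it hides a short tag in an invisible channel (trailing whitespace on each line), one bit per line. A production watermark would need to survive reformatting and editing, which this toy channel does not.

```python
# Minimal sketch of whitespace steganography for watermarking code.
# Illustrative only -- NOT the authors' proposed scheme; a real
# watermark would need robustness to reformatting and student edits.

def embed_watermark(code: str, watermark: str) -> str:
    """Hide watermark bits in line endings: trailing space = 1, none = 0."""
    bits = "".join(f"{ord(c):08b}" for c in watermark)
    lines = code.splitlines()
    if len(bits) > len(lines):
        raise ValueError("code has too few lines to carry the watermark")
    stamped = [
        line + " " if i < len(bits) and bits[i] == "1" else line
        for i, line in enumerate(lines)
    ]
    return "\n".join(stamped)

def extract_watermark(code: str, n_chars: int) -> str:
    """Recover n_chars characters from the trailing-whitespace channel."""
    bits = ["1" if line.endswith(" ") else "0" for line in code.splitlines()]
    bits = bits[: n_chars * 8]
    return "".join(
        chr(int("".join(bits[i : i + 8]), 2)) for i in range(0, len(bits), 8)
    )

# Example: stamp a 2-character tag into a 16-line snippet.
snippet = "\n".join(f"x{i} = {i}" for i in range(16))
stamped = embed_watermark(snippet, "AI")
print(extract_watermark(stamped, 2))  # prints "AI"
```

The design choice here is that the carrier (whitespace) is semantically inert, so the watermarked program behaves identically to the original; the trade-off is fragility, since any editor that strips trailing whitespace erases the mark.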