Large Language Models (GPT) Struggle to Answer Multiple-Choice Questions About Code
Jaromir Savelka, Arav Agarwal, Christopher Bogart, Majd Sakr
2023
Abstract
We analyzed the effectiveness of three generative pre-trained transformer (GPT) models in answering multiple-choice question (MCQ) assessments, often involving short snippets of code, from introductory and intermediate programming courses at the postsecondary level. This emerging technology stirs countless discussions of its potential uses (e.g., exercise generation, code explanation) as well as misuses in programming education (e.g., cheating). However, the capabilities and limitations of GPT models in reasoning about and/or analyzing code in educational settings have been under-explored. We evaluated several of OpenAI's GPT models on formative and summative MCQ assessments from three Python courses (530 questions). We found that MCQs containing code snippets are not answered as successfully as those that contain only natural language. While questions that require filling in a blank in the code or completing a natural language statement about the snippet are handled rather successfully, MCQs that require analysis and/or reasoning about the code (e.g., what is true/false about the snippet, or what is its output) appear to be the most challenging. These findings can be leveraged by educators to adapt their instructional practices and assessments in programming courses, so that GPT becomes a valuable assistant for the learner rather than a source of confusion and/or a potential hindrance in the learning process.
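To illustrate the output-prediction type of MCQ the abstract describes as most challenging, a typical item might present a short Python snippet and ask which option matches its output. The example below is a hypothetical illustration, not an item from the paper's question bank:

# Hypothetical MCQ: "What is the output of the following code?"
data = [1, 2, 3, 4]
result = [x * 2 for x in data if x % 2 == 0]
print(result)

# Options: (a) [2, 4, 6, 8]  (b) [4, 8]  (c) [2, 4]  (d) an error is raised
# Correct answer: (b) [4, 8] -- only even elements are kept, then doubled

Answering such an item requires tracing the code's behavior rather than completing a pattern, which is the kind of reasoning the study found GPT models handle least reliably.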
Paper Citation
in Harvard Style
Savelka J., Agarwal A., Bogart C. and Sakr M. (2023). Large Language Models (GPT) Struggle to Answer Multiple-Choice Questions About Code. In Proceedings of the 15th International Conference on Computer Supported Education - Volume 2: CSEDU, ISBN 978-989-758-641-5, SciTePress, pages 47-58. DOI: 10.5220/0011996900003470
in BibTeX Style
@conference{csedu23,
author={Jaromir Savelka and Arav Agarwal and Christopher Bogart and Majd Sakr},
title={Large Language Models (GPT) Struggle to Answer Multiple-Choice Questions About Code},
booktitle={Proceedings of the 15th International Conference on Computer Supported Education - Volume 2: CSEDU},
year={2023},
pages={47-58},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011996900003470},
isbn={978-989-758-641-5},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 15th International Conference on Computer Supported Education - Volume 2: CSEDU
TI - Large Language Models (GPT) Struggle to Answer Multiple-Choice Questions About Code
SN - 978-989-758-641-5
AU - Savelka J.
AU - Agarwal A.
AU - Bogart C.
AU - Sakr M.
PY - 2023
SP - 47
EP - 58
DO - 10.5220/0011996900003470
PB - SciTePress