Authors: Yûsei Kido¹; Hiroaki Yamada¹; Takenobu Tokunaga¹; Rika Kimura²; Yuriko Miura²; Yumi Sakyo² and Naoko Hayashi²

Affiliations: ¹School of Computing, Tokyo Institute of Technology, Japan; ²Graduate School of Nursing Science, St. Luke's International University, Japan
Keyword(s):
Large Language Models, National Nursing Examination, Distractor Generation, Multiple-Choice Questions, Automatic Question Generation.
Abstract:
This paper introduces our ongoing research project that aims to generate multiple-choice questions for the Japanese National Nursing Examination using large language models (LLMs). We report the progress and prospects of our project. A preliminary experiment assessing the LLMs' potential for question generation in the nursing domain led us to focus on distractor generation, which is a difficult part of the entire question-generation process. Therefore, our problem is generating distractors given a question stem and key (correct choice). We prepare a question dataset from the past National Nursing Examination for the training and evaluation of LLMs. The generated distractors are evaluated by comparing them with the reference distractors in the test set. We propose reference-based evaluation metrics for distractor generation by extending recall and precision, which are popular in information retrieval. However, as the reference is not the only acceptable answer, we also conduct human evaluation. We evaluate four LLMs: GPT-4 with few-shot learning, ChatGPT with few-shot learning, ChatGPT with fine-tuning and JSLM with fine-tuning. Our future plan includes improving the LLMs' performance by integrating question writing guidelines into the prompts to LLMs and conducting a large-scale administration of automatically generated questions.
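As a minimal sketch of reference-based evaluation, the following hypothetical Python snippet computes precision and recall as set overlap between generated and reference distractors. The paper's actual metrics extend recall and precision in ways not detailed in this abstract, and the example distractor strings are invented for illustration.

```python
def distractor_precision_recall(generated, reference):
    """Set-overlap precision/recall between generated and reference distractors.

    Hypothetical illustration only: the abstract does not specify how the
    proposed metrics extend IR-style recall and precision (e.g. whether
    matching is exact or similarity-based).
    """
    gen = set(generated)
    ref = set(reference)
    overlap = len(gen & ref)
    precision = overlap / len(gen) if gen else 0.0
    recall = overlap / len(ref) if ref else 0.0
    return precision, recall


# Invented example: two of three generated distractors match the reference set.
p, r = distractor_precision_recall(
    ["hypotension", "bradycardia", "fever"],
    ["hypotension", "tachycardia", "fever"],
)
```

In practice exact string matching is likely too strict for free-form LLM output, which is presumably one reason the authors complement these metrics with human evaluation.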