Authors:
Vijay Kumari
and
Yashvardhan Sharma
Affiliation:
Birla Institute of Technology and Science, Pilani, Rajasthan, India
Keyword(s):
Natural Language Processing, Machine Learning, Subjective Answer Evaluation, Learning Assessments.
Abstract:
The evaluation of answer scripts is vital for assessing a student's performance. Manual evaluation of answers can be biased: the assessment depends on various factors, including the evaluator's mental state, their relationship with the student, and their expertise in the subject matter. These factors make evaluating descriptive answers a tedious and time-consuming task. Automatic scoring approaches can simplify the evaluation process. This paper presents an automated answer-script evaluation model that aims to reduce the need for human intervention, minimize bias caused by changes in the evaluator's psychological state, save time, keep a record of evaluations, and simplify answer extraction. The proposed method can automatically weigh the assessment criteria and produce results nearly identical to an instructor's. To evaluate the developed model, we compared its grades with the teacher's grades, as well as with the results of several keyword-matching and similarity-check techniques.
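One of the similarity-check techniques mentioned above can be illustrated with a minimal sketch: cosine similarity between term-frequency vectors of a reference answer and a student answer. This is a generic baseline for such comparisons, not the paper's own model; the example texts and function name are illustrative assumptions.

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between term-frequency vectors of two texts.

    Returns a score in [0, 1]; 1.0 means identical word distributions.
    """
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product over the shared vocabulary.
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical reference answer and student answer.
reference = "photosynthesis converts light energy into chemical energy in plants"
answer = "plants use photosynthesis to convert light into chemical energy"
score = cosine_similarity(reference, answer)
print(f"similarity: {score:.2f}")
```

A practical system would add preprocessing (stop-word removal, stemming, synonym handling) and combine such lexical scores with keyword matching before mapping the result to a grade scale.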