Authors:
Dimos Triantis; Charalampos Stergiopoulos and Panagiotis Tsiakas
Affiliation:
Technological Educational Institution (T.E.I.) of Athens, Greece
Keyword(s):
Computer-aided assessment, Evaluation methodologies, Automated grading, Post-secondary education, Evaluation of CAL systems.
Related Ontology Subjects/Areas/Topics:
Assessment Methods in Blended Learning Environments; Assessment Software Tools; Computer-Aided Assessment; Computer-Supported Education; e-Learning; e-Learning Hardware and Software; e-Learning Platforms; e-Testing and Test Theories; Learning/Teaching Methodologies and Assessment; Simulation and Modeling; Simulation Tools and Platforms
Abstract:
The aim of this work was to compare multiple-choice questions (MCQs) as an examination method with one based on oral-response questions (ORQs). MCQs have the advantage of objectivity in grading and speed in producing results, but they also introduce an error into the final score: the probability that a question is answered by chance or on an instinctive feeling. In the present study, both MCQ and ORQ tests were administered to examinees within a computer-based learning system. To avoid a mixed scoring procedure (i.e. both positive and negative marking), a set of MCQ pairs was composed. The two MCQs in each pair were similar, derived from the same topic, and this similarity was not evident to an examinee without adequate knowledge of that topic. An examination based on these “paired” MCQs, using a suitable scoring rule and given to the same sample of students on the same topics and with the same levels of difficulty, produced grades statistically indistinguishable from those of an examination based on ORQs, while both the “paired” MCQ test results and the ORQ test results differed significantly from those obtained with an MCQ test using a positive-only scoring rule.
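The abstract does not state the paired scoring rule itself. One rule consistent with its goal, suppressing lucky guesses without resorting to negative marking, is to award credit for a pair only when both of its items are answered correctly. A minimal sketch under that assumption (the function, its parameters, and the example data are hypothetical, not taken from the paper):

```python
def paired_mcq_score(answers, key, pairs, points_per_pair=1.0):
    """Score a 'paired' MCQ test.

    A pair earns credit only if BOTH of its items are answered
    correctly, so a chance hit on one item of a pair yields nothing.
    This is an assumed rule; the paper's exact rule may differ.
    """
    score = 0.0
    for i, j in pairs:  # indices of the two similar items in each pair
        if answers[i] == key[i] and answers[j] == key[j]:
            score += points_per_pair
    return score


# Hypothetical example: 3 pairs of items; the student misses item 3,
# so the second pair earns no credit despite one correct item.
key = ["A", "C", "B", "D", "A", "C"]
answers = ["A", "C", "B", "A", "A", "C"]
pairs = [(0, 1), (2, 3), (4, 5)]
print(paired_mcq_score(answers, key, pairs))  # → 2.0
```

Under this rule the expected score from pure guessing falls from 1/k per item (for k options) to 1/k² per pair, which is one way such a design can narrow the gap between MCQ and oral-response grades.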