within the range between k=0.26 and k=0.34, with a minimum at k=0.30. This value of the parameter k is the optimal one to apply to the scoring rule. The same figure also shows the relation of R² to the parameter k. It is observed that the maximum value of R² was also found for k=0.3. This value, which appears to be the optimal one, has also been observed when applying this method to other modules already published (Ventouras et al., 2010; Triantis & Ventouras, 2011), and it optimizes the students' overall scores so that they objectively reflect the students' level of knowledge.
6 CONCLUSIONS
Electronic examinations supported by special software tools are very helpful for the educational process, as they provide the means for automatically producing the results and the ability to easily apply different scoring rules. In this way the lecturer can obtain a clear picture of the results, which may be used for optimizing the way of teaching and of disseminating material.
When comparing the CRQs examination method with the MCQs examination method, it was observed that the classic scoring rule of awarding a positive score for correct answers introduced a bias, because it fails to eliminate the "guessing" factor, a common phenomenon in MCQs examinations. Therefore, such a simple scoring rule cannot advance the MCQs examination method as a potential substitute for the CRQs examination method.
Nevertheless, applying a scoring rule that introduces a special parameter whose value is added to or subtracted from the overall score according to correct or wrong answers, combined with the concept of pairs of questions addressing the same topic, can give results that are very close to those produced by the CRQs method. To the extent of the results of the present study, there is an indication that a value of the parameter k approximately equal to 0.3 optimally gives results that clearly and objectively reflect the level of the students' knowledge.
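As an illustration only, a minimal sketch of such a rule is given below in Python. The base score of one point per correct answer, the exact pair-level bonus/penalty condition, and the function name are assumptions made for this example, since the precise formula is not spelled out here; the single parameter k plays the role described above.

    def paired_mcq_score(answers, pairs, k=0.3):
        """Illustrative sketch of a paired-question MCQ scoring rule.

        answers: dict mapping question id -> True (correct) / False (wrong)
        pairs:   list of (qid_a, qid_b) tuples; each pair probes one topic
        k:       adjustment parameter (k = 0.3 was found near-optimal here)

        Assumed rule (an assumption of this sketch): one point per correct
        answer as a base, plus k for a pair answered consistently correctly,
        minus k for a pair containing at least one wrong answer.
        """
        score = sum(1.0 for correct in answers.values() if correct)
        for qa, qb in pairs:
            if answers[qa] and answers[qb]:
                score += k  # both questions on the topic answered correctly
            else:
                score -= k  # guessing deterrent: a wrong answer costs k
        return max(score, 0.0)  # floor the score at zero

    # Example: topic pair (q1, q2) answered consistently, pair (q3, q4) not.
    answers = {"q1": True, "q2": True, "q3": True, "q4": False}
    print(paired_mcq_score(answers, [("q1", "q2"), ("q3", "q4")]))  # -> 3.0

In this sketch the pair structure is what lets the penalty target guessing: an inconsistent pair of answers on the same topic costs the student, whereas isolated correct answers still earn their base point.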
The key factor in applying this rule is a thorough preparation of the questions by the examiner, in such a way that they cover all topics of interest and can form pairs whose relation to a specific topic will not be evident to a student who is not well prepared.
Part of future work will be research on the results obtained when assigning different values to the k_bonus and k_penalty parameters, respectively. During this research an algorithm might also be designed to enhance the electronic examination application by automatically selecting the optimized value of the parameter k, as sketched below. The scoring rule also has to be tested in other modules in order to further verify its usefulness as an objective evaluation tool.
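A sketch of what such a selection algorithm might look like follows, assuming that the goodness criterion is the R² of a simple linear fit between the MCQ-derived scores and the reference CRQs scores, as in the analysis above; the grid search, the helper names, and the score_fn interface are assumptions made for the example.

    import numpy as np

    def r_squared(x, y):
        """R^2 of a simple linear fit of y on x (the squared correlation)."""
        r = np.corrcoef(x, y)[0, 1]
        return r * r

    def select_k(score_fn, exams, crq_scores, k_grid):
        """Grid search for the (k_bonus, k_penalty) pair maximizing R^2.

        score_fn:   callable(answers, k_bonus, k_penalty) -> MCQ score
        exams:      list of per-student answer records
        crq_scores: the same students' CRQs scores, used as the reference
        k_grid:     iterable of (k_bonus, k_penalty) candidates to try
        """
        best_r2, best_pair = -1.0, None
        for kb, kp in k_grid:
            mcq_scores = [score_fn(ans, kb, kp) for ans in exams]
            r2 = r_squared(mcq_scores, crq_scores)
            if r2 > best_r2:
                best_r2, best_pair = r2, (kb, kp)
        return best_pair, best_r2

    # Candidate grid around the k = 0.3 region reported above, including
    # asymmetric bonus/penalty combinations for the future-work scenario.
    k_grid = [(kb, kp) for kb in np.arange(0.26, 0.35, 0.02)
                       for kp in np.arange(0.26, 0.35, 0.02)]

In the symmetric case k_bonus = k_penalty the search reduces to scanning the single parameter k, which corresponds to the analysis that yielded k = 0.3 above.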
REFERENCES
Bereby-Meyer, Y., Meyer, J., & Flascher, O. M. 2002.
Prospect theory analysis of guessing in multiple choice
tests. Journal of Behavioral Decision Making, 15(4),
pp. 313-327.
Bereby-Meyer, Y., Meyer, J., & Budescu, D. V. 2003.
Decision making under internal uncertainty: The case
of multiple-choice tests with different scoring
rules. Acta Psychologica, 112(2), pp. 207-220.
Bush, M. E. 2006. Quality assurance of multiple-choice
tests. Quality Assurance in Education, 14(4), pp. 398-
404.
DeBord, K. A., Aruguete, M. S., & Muhlig, J. 2004. Are
computer-assisted teaching methods effective? Teaching of Psychology, 31(1), pp. 65-68.
Dede, C. 2005. Planning for neomillennial learning styles.
Educause Quarterly, 28(1), pp. 7-12.
Friedl, R., Höppler, H., Ecard, K., Scholz, W., Hannekum,
A., Öchsner, W., & Stracke, S. 2006. Multimedia-
driven teaching significantly improves students’
performance when compared with a print medium. The Annals of Thoracic Surgery, 81(5), pp. 1760-1766.
Freeman, R., & Lewis, R. 1998. Planning and
implementing assessment. Routledge.
Lukhele, R., Thissen, D., & Wainer, H. 1994. On the
Relative Value of Multiple-Choice, Constructed-Response, and Examinee-Selected Items on Two Achievement Tests. Journal of Educational
Measurement, 31(3), pp. 234-250.
Reiser, R. A., & Dempsey, J. V. 2011. Trends and issues
in instructional design and technology. Pearson.
Scharf, E. M., & Baldwin, L. P. 2007. Assessing multiple
choice question (MCQ) tests - a mathematical perspective. Active Learning in Higher Education,
8(1), pp. 31-47.
Stergiopoulos, C., Tsiakas, P., Triantis, D., & Kaitsa, M.
2006. Evaluating Electronic Examination Methods
Applied to Students of Electronics. Effectiveness and
Comparison to the Paper-and-Pencil Method.
In Sensor Networks, Ubiquitous, and Trustworthy Computing, 2006. IEEE International Conference on, 2, pp. 143-151.
Tsiakas, P., Stergiopoulos, C., Nafpaktitis, D., Triantis, D.,
& Stavrakas, I. 2007. Computer as a tool in teaching,
examining and assessing electronic engineering
students. In EUROCON’07, The International
Conference on "Computer as a Tool", pp. 2490-2497.
Triantis, D., & Ventouras, E. 2011. Enhancing Electronic
Examinations through Advanced Multiple-Choice
Questionnaires. Higher Education Institutions and