Detection of Inconsistencies in Student Evaluations

Štefan Pero, Tomáš Horváth

2013

Abstract

Evaluating the solutions to tasks or projects submitted by students is a complex process driven mainly by the subjective evaluation criteria of a given teacher. Each teacher is somehow biased, i.e., differs in how strict he or she is when assigning grades to solutions. Besides the teacher's bias, other factors also contribute to grading: for example, teachers can make mistakes, or the grading scale may be too coarse-grained or too fine-grained. Grades are often provided together with the teacher's textual evaluations, which are considered more expressive than a single number. Such textual evaluations, however, should be consistent with the grades, meaning that if two solutions have very similar textual evaluations, their grades should also be very similar. Nevertheless, inconsistencies between textual evaluations and grades tend to arise, especially when a teacher has to assess a large number of solutions, or when more than one teacher is involved in the evaluation process. In this paper we propose a simple approach for detecting inconsistencies between textual evaluations and grades. Experiments are provided on two real-world datasets collected from the teaching process at our university.
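The core idea stated in the abstract — flagging pairs of solutions whose textual evaluations are very similar while their grades differ substantially — could be sketched roughly as follows. This is an illustrative bag-of-words sketch, not the actual method from the paper: the function names, the cosine-similarity text representation, and the `sim_threshold` and `grade_gap` parameters are all assumptions made for the example.

```python
from collections import Counter
import math


def cosine_sim(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def find_inconsistencies(evaluations, sim_threshold=0.8, grade_gap=2):
    """Flag pairs whose textual evaluations are nearly identical but whose
    grades differ by at least `grade_gap`.

    `evaluations` is a list of (text, grade) pairs; returns a list of
    (index_i, index_j, similarity, grade_difference) tuples.
    """
    vecs = [Counter(text.lower().split()) for text, _ in evaluations]
    flagged = []
    for i in range(len(evaluations)):
        for j in range(i + 1, len(evaluations)):
            sim = cosine_sim(vecs[i], vecs[j])
            gap = abs(evaluations[i][1] - evaluations[j][1])
            if sim >= sim_threshold and gap >= grade_gap:
                flagged.append((i, j, sim, gap))
    return flagged


# Hypothetical data: identical comments, very different grades (1 = best).
evals = [
    ("good work minor style issues", 1),
    ("good work minor style issues", 4),
    ("incomplete solution missing tests", 4),
]
print(find_inconsistencies(evals))  # flags the pair (0, 1)
```

In practice one would likely use a tf-idf weighting (as several of the paper's references discuss) rather than raw term counts, so that frequent, uninformative words do not dominate the similarity score.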



Paper Citation


in Harvard Style

Pero, Š. and Horváth, T. (2013). Detection of Inconsistencies in Student Evaluations. In Proceedings of the 5th International Conference on Computer Supported Education - Volume 1: CSEDU, ISBN 978-989-8565-53-2, pages 246-249. DOI: 10.5220/0004385602460249


in Bibtex Style

@conference{csedu13,
author={Štefan Pero and Tomáš Horváth},
title={Detection of Inconsistencies in Student Evaluations},
booktitle={Proceedings of the 5th International Conference on Computer Supported Education - Volume 1: CSEDU},
year={2013},
pages={246-249},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004385602460249},
isbn={978-989-8565-53-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 5th International Conference on Computer Supported Education - Volume 1: CSEDU
TI - Detection of Inconsistencies in Student Evaluations
SN - 978-989-8565-53-2
AU - Pero Š.
AU - Horváth T.
PY - 2013
SP - 246
EP - 249
DO - 10.5220/0004385602460249