The Effect of Peer Assessment Rubrics on Learners' Satisfaction and Performance Within a Blended MOOC Environment

Ahmed Mohamed Fahmy Yousef, Usman Wahid, Mohamed Amine Chatti, Ulrik Schroeder, Marold Wosnitza

2015

Abstract

Massive Open Online Courses (MOOCs) have a remarkable ability to expand access to education for large numbers of participants worldwide, beyond the formality of higher education systems. MOOCs enable participants to become actively involved in collaborative learning and to construct their own learning experience across a variety of domains. However, one of the biggest challenges facing MOOCs is how to assess learners' performance in a massive learning environment beyond traditional automated assessment methods. To address this challenge, peer assessment has been proposed as an effective assessment method in MOOCs. The open question, however, is how to ensure the quality of peer assessment in terms of validity and reliability. Moreover, assessment in blended MOOCs (bMOOCs) raises unique challenges in finding the best peer assessment model for a learning environment that brings together face-to-face interactions and online activities. This paper presents the details of a study conducted to investigate peer assessment in bMOOCs. The results show that flexible rubrics have the potential to make the feedback process more accurate, credible, transparent, valid, and reliable, thus ensuring the quality of the peer assessment task.
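
The paper does not publish its grading pipeline; the Python sketch below is only an illustrative assumption of how rubric-based peer grades might be aggregated per criterion and screened for rater disagreement. The criteria names, the 0-5 scale, and the disagreement threshold are hypothetical, not taken from the study.

from statistics import median, pstdev

# Peer ratings per submission: each rater scores three hypothetical
# rubric criteria on a 0-5 scale (criteria names are assumptions).
peer_ratings = {
    "submission_1": [
        {"content": 4, "structure": 5, "citations": 3},
        {"content": 5, "structure": 4, "citations": 4},
        {"content": 2, "structure": 5, "citations": 1},
    ],
}

DISAGREEMENT_THRESHOLD = 1.0  # assumed std.-dev. cutoff for instructor review

for submission, ratings in peer_ratings.items():
    criteria = ratings[0].keys()
    # Median per criterion is robust against a single outlier rater.
    grade = {c: median(r[c] for r in ratings) for c in criteria}
    # Flag criteria where peer raters disagree strongly, so that the
    # face-to-face side of the blended course can step in.
    flagged = [c for c in criteria
               if pstdev(r[c] for r in ratings) > DISAGREEMENT_THRESHOLD]
    print(submission, grade, "needs review:" if flagged else "ok", flagged)

Median aggregation and a plain standard-deviation check are deliberately simple stand-ins; tuned statistical models of peer grading would be the natural refinement at MOOC scale.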



Paper Citation


in Harvard Style

Yousef A., Wahid U., Chatti M., Schroeder U. and Wosnitza M. (2015). The Effect of Peer Assessment Rubrics on Learners' Satisfaction and Performance Within a Blended MOOC Environment. In Proceedings of the 7th International Conference on Computer Supported Education - Volume 2: CSEDU, ISBN 978-989-758-108-3, pages 148-159. DOI: 10.5220/0005495501480159


in Bibtex Style

@conference{csedu15,
author={Ahmed Mohamed Fahmy Yousef and Usman Wahid and Mohamed Amine Chatti and Ulrik Schroeder and Marold Wosnitza},
title={The Effect of Peer Assessment Rubrics on Learners' Satisfaction and Performance Within a Blended MOOC Environment},
booktitle={Proceedings of the 7th International Conference on Computer Supported Education - Volume 2: CSEDU},
year={2015},
pages={148-159},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005495501480159},
isbn={978-989-758-108-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 7th International Conference on Computer Supported Education - Volume 2: CSEDU
TI - The Effect of Peer Assessment Rubrics on Learners' Satisfaction and Performance Within a Blended MOOC Environment
SN - 978-989-758-108-3
AU - Yousef A.
AU - Wahid U.
AU - Chatti M.
AU - Schroeder U.
AU - Wosnitza M.
PY - 2015
SP - 148
EP - 159
DO - 10.5220/0005495501480159