The Effect of Peer Assessment Rubrics on Learners' Satisfaction and
Performance Within a Blended MOOC Environment

Ahmed Mohamed Fahmy Yousef¹,², Usman Wahid², Mohamed Amine Chatti¹,², Ulrik Schroeder¹,² and Marold Wosnitza³

¹ Learning Technologies Group (Informatik 9), RWTH Aachen University, Ahornstrasse 55, Aachen, Germany
² Center for Innovative Learning Technologies (CiL), RWTH Aachen University, Ahornstrasse 55, Aachen, Germany
³ School of Pedagogy and Educational Research, RWTH Aachen University, Eilfschornsteinstraße 7, Aachen, Germany
Keywords: Massive Open Online Courses, MOOCs, Blended MOOCs, bMOOCs, Peer Assessment, Collaborative
Learning, Rubrics.
Abstract: Massive Open Online Courses (MOOCs) have a remarkable ability to expand access to education for large numbers of participants worldwide, beyond the formality of higher education systems. MOOCs enable participants to be actively involved in collaborative learning and to construct their own learning experience in a variety of domains. However, one of the biggest challenges facing MOOCs is how to assess the learners' performance
in a massive learning environment beyond traditional automated assessment methods. To address this
challenge, peer assessment has been proposed as an effective assessment method in MOOCs. The problem
is, however, how to ensure the quality of the peer assessment in terms of validity and reliability. Moreover,
assessment in blended MOOCs (bMOOCs) introduces unique challenges regarding the best peer assessment
model in a learning environment that brings together face-to-face interactions and online activities. This
paper presents the details of a study conducted to investigate peer assessment in bMOOCs. The study results
show that flexible rubrics have the potential to make the feedback process more accurate, credible,
transparent, valid, and reliable, thus ensuring the quality of the peer assessment task.
1 INTRODUCTION
Massive Open Online Courses (MOOCs) have succeeded in offering a large number of university-level courses to a huge number of participants around the globe without any entry requirements or tuition fees, regardless of their location, age, income, ideology, and educational background (Yousef et al., 2014a). Different types of MOOCs have been
introduced in the MOOC literature. Daniel (2012)
and Siemens (2013) classified MOOCs into
connectivist MOOCs (cMOOCs) and extension
MOOCs (xMOOCs). The vision behind cMOOC is
based on the theory of connectivism, which fosters
connections, collaborations, and knowledge sharing
among course participants. The second type, xMOOCs, follows behaviorist and cognitivist theories with some social constructivist aspects. xMOOC platforms were developed by different elite universities and are usually distributed through third-party providers such as Coursera, edX, and Udacity.
Despite their popularity and large-scale participation, a variety of concerns and criticisms about the use of MOOCs have been raised. Yousef et al.
(2014a) in their comprehensive analysis of the
MOOC literature reported that the major limitation
in MOOCs is the lack of human interaction (i.e.
face-to-face communication). Furthermore, the
authors pointed out that the original concept of
MOOCs that aims at breaking down the barriers of
education for anyone, anywhere, and at any time, is far from reality. In fact, most of the
existing (x)MOOC implementations still follow a
centralized and controlled top-down, teacher-
centered learning model. Initiatives to implement
student-centered, open, bottom up, and distributed
forms of MOOCs are the exception rather than the
rule. Other researchers point out concerns about the limitations of MOOCs. These concerns include pedagogical problems in providing participants with timely, accurate, and meaningful feedback on their assignment tasks (Hill, 2013; Piech et al., 2013; Luo et al., 2014); lack of
interactivity between learners and the video content
(Grünewald et al., 2013); high drop-out rates, on
average 95%, of course participants (Daniel, 2012).
A plausible reason for the latter problem might be the complexity and diversity of the participants. This
diversity is not only related to cultural and
demographic attributes, but also takes into account
individual motives and perspectives when enrolled
in MOOCs (Yousef et al., 2015b).
In order to address these limitations, a new design paradigm has emerged, called blended MOOCs (bMOOCs). This paradigm aims to bring together
in-class (i.e. face-to-face) interactions and online
learning components as a blended environment. This
blended model can resolve some of the hurdles
facing standalone MOOCs (Ostashewski & Reid,
2012; Bruff, et al., 2013). The bMOOCs model has
the potential to bring human interactions into the
MOOC environment, foster student-centered
learning, support the interactive design of the video
lectures, provide effective assessment and feedback,
as well as accommodate the diverse perspectives of the MOOC participants.
However, the ability to evaluate a large number of participants in MOOCs is obviously a big challenge (Yin and Kawachi, 2013). The most widely used
evaluation technique in MOOCs is regular
automated assessment, which is restricted to closed
question formats, e.g. quizzes with multiple choice
questions (Díez et al., 2013; Kaplan & Bornet,
2014). This method of assessment is relatively easy
to apply in science curricula courses, even though
the level of competences to be examined is rather
limited. It seems even more difficult to apply this assessment method in humanities courses, mainly due to the nature of these courses, which are based on the creativity and imagination of the learners (Sandeen, 2013). This provides strong grounds for alternative assessment methods, in both domains, that give effective and constructive feedback to MOOC participants on their open-ended exercises or essays.
The generic aim of most assessment methods is to provide this kind of feedback, which usually involves teaching staff correcting and grading the assignments. In MOOC scenarios, this requires
substantial resources in terms of time, money, and
manpower. To alleviate this problem, we argue that
the most suitable way is to look for assessment
methods that employ the wisdom of the crowd. Such
assessment methods include portfolios, wrappers,
self-assessment, group feedback, and peer
assessment (Chatti et al., 2014; Davis et al., 2014).
A learner portfolio is an approach to authentic
assessment that potentially enables large classes to
reflect on their work (McMullan, 2003); wrapping
assessment techniques use a set of reflective
questions to engage participants in self-assessment
and self-directed learning (Yorke, 2007); self-
assessment can be used to prompt learners’
reflection on their own learning outcomes; and peer assessment refers to crowdsourcing grading activities where learners can take responsibility for rating,
evaluating, and providing feedback on each other’s
work (Topping, 1998).
We considered these different crowdsourcing
assessment activities, and concluded that the most
suitable assessment method in our scenario is to
involve the learners themselves under supervision
and guidance from the teachers. We think that peer
assessment activities that involve learners
themselves in the assessment process can play a
crucial role in supporting an effective MOOC
experience. So far, little research has been carried
out to investigate the effectiveness of using peer
assessment in a bMOOC context (Chatti et al., 2014;
Suen, 2014). In an attempt to handle this assessment issue, this paper presents in detail a study conducted to investigate the effectiveness of using peer assessment on learners' performance and satisfaction in the bMOOC environment L²P-bMOOC.
2 L²P-BMOOC: FIRST DESIGN
As highlighted earlier, current MOOCs suffer from
several critical limitations, among which are the
focus on the traditional teacher-centered model, the
lack of human interaction, as well as the lack of
interaction between learners and the video content
(Grünewald et al., 2013; Yousef et al., 2015b).
L²P-bMOOC is an extension of the L²P learning platform of RWTH Aachen University, Germany. It was designed and implemented to address these limitations. L²P-bMOOC supports learner-centered bMOOCs by providing an environment where learners can take an active role in the management of their learning activities, thus harnessing the potential of bMOOCs to support self-organized learning. L²P-bMOOC fosters human interaction through face-to-face communication and scaffolding, driven by a blended learning approach.
The platform includes a video annotation tool that
enables learners’ collaboration and interaction
TheEffectofPeerAssessmentRubricsonLearners'SatisfactionandPerformanceWithinaBlendedMOOCEnvironment
149
around a video lecture to engage the learners and
increase interaction between them and the video
content. Thus, L²P-bMOOC changes the traditional MOOC concept, where learners are limited to viewing video content, towards a collaborative and dynamic one. Learners are encouraged to organize
their learning, collaborate with each other, create
and share their knowledge with others.
In L²P-bMOOC, video lectures are collaboratively structured and annotated in a mind-map representation. Figure 1 shows the workspace of L²P-bMOOC, which consists of a course selection section, an unbound canvas representing the video map structure of the lecture, and a sidebar for adding new video nodes and editing video properties. Possible actions on a video node include video annotations, video clipping, social bookmarking (i.e. attaching external web feeds), and collaborative discussion threads (Yousef et al., 2015c).
Figure 1: L²P-bMOOC Workspace.
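To make the mind-map representation more concrete, the following Python sketch models a single video node together with the possible learner actions listed above. The class name, field names, and example URLs are illustrative assumptions for this paper's description, not the platform's actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VideoNode:
    """One node in the collaboratively built video map of a lecture."""
    title: str
    video_url: str
    clip: Optional[tuple] = None                            # (start_sec, end_sec) for video clipping
    annotations: List[str] = field(default_factory=list)    # collaborative video annotations
    bookmarks: List[str] = field(default_factory=list)      # attached external web feeds
    discussion: List[str] = field(default_factory=list)     # threaded comments on the node
    children: List["VideoNode"] = field(default_factory=list)  # mind-map branches (related videos)

# Example: a lecture node with a learner-added related video and an annotation.
lecture = VideoNode("Lecture 1: Teaching Methodologies", "https://example.org/lec1")
lecture.children.append(VideoNode("Related video added by a learner",
                                  "https://example.org/related"))
lecture.annotations.append("Key definition at 03:15")
```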
As a pilot test for this platform, the course "Teaching Methodologies" was delivered as a bMOOC by Fayoum University, Egypt, in cooperation with RWTH Aachen University. It
started in March 2014 and ran for eight weeks. This
course was offered both formally to students from
Fayoum University and informally with open
enrollment to anybody who was interested in
teaching and learning methodologies. At the end of
the course, there were 128 active participants. 93
were formal participants who took the course to earn
credits from Fayoum University. These participants
were required to complete it and obtain positive
grading of assignments. The rest were informal
participants undertaking the learning activities at
their own pace without receiving any credits. The
teaching staff provided six video lectures and the course participants added 27 related videos.
The course was taught in English and the
participants were encouraged to self-organize their
learning environments, to present their own ideas,
collaboratively create video maps of the lectures,
and share their newly-acquired knowledge through
social bookmarking, annotations, forums, and
discussion threads (Yousef et al., 2015c).
To evaluate whether the platform supports and
achieves the goals of “network learning” and “self-
organized learning”, we designed a qualitative study
based on a questionnaire. This questionnaire utilized a 5-point Likert scale ranging from (1) strongly disagree to (5) strongly agree. We derived the
results and reported conclusions based on the 50
participants who completed and submitted the
questionnaire by the end of the survey period. The
results obtained from this preliminary analysis are
summarized in the following points:
The collaboration and communication tools (i.e.
group workspaces, discussion forums, live chat,
social bookmarking, and collaborative annotations)
allowed the course participants to discuss, share,
exchange, and collaborate on knowledge construction, as well as receive feedback and
The results further show that the majority agreed
that L²P-bMOOC allowed them to be self-organized
in their learning process. In particular, the
participants reported that it helped them to learn
independently from teachers and encouraged them to
work at their own pace to achieve their learning
goals.
The study, however, identified two problems
concerning assessment and feedback. The
participants had some difficulties in tracking and
monitoring their learning activities and those of their
peers. The second issue pointed out was the limited
ability to evaluate and give effective feedback for
their open-ended exercises (Yousef et al., 2015c).
A possible solution for the first problem was the
introduction of learning analytics features. These
features can improve the participants’ learning
experience through e.g. the monitoring of their
progress and supporting (self)-reflection on their
learning activities. To alleviate the second problem,
we opted for peer assessment. As motivated in the
previous section, one possible scenario for peer
assessment is the evaluation of assignments that cannot be corrected automatically, such as open-ended exercises and essays.
In August 2014, we conducted a second case
study to evaluate the usability and effectiveness of
the learning analytics module. The focus of this
study was to examine to which extent this module
supported personalization, awareness, self-
reflection, monitoring, and recommendation in
bMOOCs (Yousef et al., 2015a). What still remained
unclear is how to leverage peer assessment in
CSEDU2015-7thInternationalConferenceonComputerSupportedEducation
150
bMOOCs. In this paper, we investigate the
application of peer assessment in bMOOCs. We
address the following research questions:
Does the peer assessment module improve
learning outcomes?
Does the peer assessment module provide a
reliable and valid feedback for participants?
Which peer assessment model fits best in a
bMOOC context?
What is the learners’ perception of satisfaction
with the usability of the peer assessment module
in L²P-bMOOC?
3 PEER ASSESSMENT IN MOOCs
Assessment and feedback are an essential part of the learning process in MOOCs. Collecting valid and
reliable data to grade learners’ assignments;
identifying learning difficulties and taking action
accordingly; and using these results are just a
portion of the measures to improve the academic
experience (Kulkarni et al., 2013). Many MOOCs
use automated assessments (e.g. quizzes with closed
questions such as multiple-choice/multiple-response)
which strongly focus on the cognitive aspects of
learning. The key challenge of automated grading in
MOOCs is the inability to capture the semantic
meaning of learners’ answers; in particular on open-
ended questions (Kulkarni et al., 2013).
On the other hand, peer assessment is a
promising alternative evaluation strategy in
MOOCs, where learners can be actively involved in
the assessment processes (O’Toole, 2013). This
method of assessment is suitable for activities like exercises, assignments, or exams that do not have clear right or wrong answers, especially in humanities, social sciences, and business studies (O’Toole, 2013). Several studies have been
conducted to investigate the pedagogical impact of
using peer assessment in traditional classroom
instruction, and acknowledged a number of distinct
advantages. These include: increase in learners’
responsibility and autonomy, new learning
opportunities for both sides (i.e. givers and receivers
of work review), an enhanced collaborative learning experience, and a striving for deeper understanding of the learning content (Topping, 1998).
Unfortunately, so far, there has been little
discussion about using peer assessment in MOOCs.
In the next section, we will discuss specifically how
MOOC providers are using peer assessment in their
courses.
3.1 Coursera
Coursera has integrated a peer assessment system in
its learning platform to evaluate and provide
feedback for at least 3 to 4 assignments. Coursera
provides learners with an optional evaluation matrix
to improve peer assessment results. In addition,
learners have the opportunity to evaluate their own work (Piech et al., 2013; Luo et al., 2014). The
peer assessment system in Coursera involves three
main phases: 1) submission phase, 2) evaluation
phase, and 3) publishing results (Coursera, 2015).
Until recently, there has been no reliable evidence
on how peer assessment affects the learning
experience in Coursera.
In several MOOCs offered by the Pennsylvania
State University and hosted online by Coursera,
learners reported that they mistrusted the peer
assessment results. Moreover, they outlined some
issues of peer assessment, such as the lack of peers’
feedback, accuracy, and credibility (Suen, 2014).
3.2 edX
Peer assessment in edX works similarly to that in Coursera. Here, learners are required to review a few sample assignments that have already been graded by the professor before evaluating their peers. After learners have proved that they can assign grades similar to
those given by the professor, they are permitted to
evaluate each other’s work and provide feedback,
using the same rubric (edX, 2015).
3.3 Peer Assessment Issues in MOOCs
Peer assessment is a valuable evaluation method for
learners to receive deeper feedback on their
assignments, but it is not always as effective as expected in MOOC scenarios (Suen, 2014). Jordan
(2013) shows that MOOCs which use peer assessment tend to have lower course completion rates compared to those that use automated assessment. In general, there are several possible
factors that can explain the lack of effectiveness of
peer assessment in MOOCs:
The issue of scale (Suen, 2014).
The diversity of reviewers’ background and
prior experience (Yousef et al., 2015b).
The lack of accuracy and credibility of peer
feedback (Suen, 2014).
The lack of transparency of the review process.
MOOC participants do not trust the validity
and reliability of peer assessment results due to
TheEffectofPeerAssessmentRubricsonLearners'SatisfactionandPerformanceWithinaBlendedMOOCEnvironment
151
the absence of a clear evaluation authority (e.g. teacher).
The low perceived expertise (McGarr &
Clifford, 2013).
Peer assessment in MOOCs employs fixed
grading rubrics. Obviously, different exercise
types require different assessment rubrics
(Sánchez-Vera & Prendes-Espinosa, 2015).
4 PEER ASSESSMENT IN L²P-BMOOC
In this study, we focus on the application of peer
assessment from a learner’s perspective to support
self-organized and network learning in bMOOCs
through peer assessment rubrics. In the following
sections, we discuss the design, implementation, and
evaluation of the new peer assessment module in
L²P-bMOOC.
4.1 Requirements
In order to enhance L²P-bMOOC with a peer
assessment module, we collected a set of
requirements from recent peer assessment and
MOOCs literature (Gielen et al., 2010; Suen, 2014;
Yousef et al., 2014a). Then, we designed a survey to
collect feedback from different MOOC stakeholders
concerning the importance of the collected
requirements. The demographic profile of the survey respondents comprised professors and learners as follows:
Professors: 98 professors who had taught a
MOOC completed this survey. 41% were from Europe, 42% from the US, and 17% from Asia.
Learners: 107 learners participated in the
survey. A slight majority of these learners were
males (56%). The learners’ ages ranged from 18
to 40+, with almost 65% between the ages of 18
and 39. 12% had a high-school or other level of education, 36% were studying for a Bachelor's degree, 40% for a Master's, and 12% for a PhD. All of them had taken one
or more online courses, and 92% had
participated in MOOCs. These learners came
from 41 different countries and cultural
backgrounds in Europe, US, Australia, Asia,
and Africa.
A summary of the survey analysis results is presented in Table 1. The agreement means of the peer assessment requirements are quite high, all above 4. In
particular, indicators 3 and 5 call for specific, albeit
flexible guidelines and rubrics. This is important to
avoid grading without reading the work, or not
following a clear grading scheme, which negatively
impacts the quality of the given feedback (Yousef et
al., 2014b).
Table 1: L²P-bMOOC Peer Assessment Requirements (N=205).

No  L²P-bMOOC Peer Assessment Requirements Items                                                M     SD
1   Students should receive feedback and/or correct answers to each assignment task.            4.57  0.90
2   Provide formative assessment and feedback within the learning process.                      4.12  1.05
3   Design flexible guidelines and rubrics for each task.                                       4.53  0.84
4   Give clear directions and time limits for in-class peer review sessions (i.e., face-to-face interaction) and set defined deadlines for out-of-class peer review assignments.  4.36  1.06
5   Each student doing the peer review should explain his or her evaluation.                    4.32  0.79
1. Strongly disagree … 5. Strongly agree
Based on the peer assessment literature review
and the survey results, we derived a set of
requirements to support peer assessment in L²P-bMOOC, as summarized below:
User Interface: The interface should be simple,
understandable, and easy to use while requiring
minimal user input. The interface design of the
module should take usability principles into
account, and go through a participatory design
process (Nielsen, 1994).
Rubrics: Provide learners with flexible task-
specific rubrics that include descriptions of each
assessment item to achieve fair and consistent
feedback for all course participants.
Management: Peer assessment should be easy
to manage. The module ought to be integrated
into the platform with features for activation
and deactivation.
Scalability: The fundamental difference
between MOOCs and a traditional classroom is the scale of learners. Consequently, scalability should be considered in the implementation of the peer assessment module in L²P-bMOOC.
Collaborative Review: Provide mechanisms for
a collaborative review process which involves
the input of more than one individual
participant.
CSEDU2015-7thInternationalConferenceonComputerSupportedEducation
152
Double Blind Process: The peer assessment module should support a double-blind review process: the assignment authors do not know the reviewers' identities, and vice versa.
Deadlines: The peer assessment module should provide two deadlines for each task: one for learners to submit their work, and another for the peer grading phase.
5 IMPLEMENTATION
The peer assessment module in L²P-bMOOC consists of six components, as shown in Figure 2.
Figure 2: Peer assessment workflow.
These peer assessment components are classified
according to the following methods:
Teachers need methods to define assignment
tasks and manage the review process.
Learners need methods to see assignment tasks and submit solutions, as well as to provide and receive peer reviews.
Microsoft SharePoint 2013 has been used as the
underlying technology of the L²P platform.
SharePoint offers a solid base for MOOC development, while offering a wide range of other advantages, including scalability, security, customization, and collaboration. The internal list structure of SharePoint makes it easy to implement fine-grained rights on individual list items, which allows for easy-to-use rights management in the L²P-bMOOC peer assessment module. Basically, it is easy to configure who can see what at a given point in time. Also, workflows can be used to organize the submission and evaluation processes.
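As an illustration of this kind of per-item visibility configuration, the following Python sketch models "who can see what at a given point in time" for a double-blind setup. It is a minimal sketch under assumed names (Submission, can_see_submission, can_see_review), not the actual SharePoint list or workflow implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Submission:
    author_id: str
    reviewer_id: str                 # assigned by the teacher (see Section 5.1.2)
    review_published: bool = False   # set once the teacher publishes the results

def can_see_submission(user_id: str, role: str, sub: Submission,
                       now: datetime, review_start: datetime) -> bool:
    """Teachers see everything; authors see their own work; the assigned
    reviewer sees the submission once the review phase has started."""
    if role == "teacher" or user_id == sub.author_id:
        return True
    return user_id == sub.reviewer_id and now >= review_start

def can_see_review(user_id: str, role: str, sub: Submission) -> bool:
    """Reviews stay hidden from authors until the teacher publishes them;
    reviewer identities are never shown to authors (double blind)."""
    if role == "teacher" or user_id == sub.reviewer_id:
        return True
    return user_id == sub.author_id and sub.review_published

# Example: the assigned reviewer may open the solution after the review start.
sub = Submission(author_id="learner_a", reviewer_id="learner_b")
print(can_see_submission("learner_b", "student", sub,
                         now=datetime(2014, 11, 12),
                         review_start=datetime(2014, 11, 11)))  # True
print(can_see_review("learner_a", "student", sub))              # False until published
```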
5.1 Teacher Perspective
The peer assessment module in L²P-bMOOC provides a centralized place of actions (a navigation ribbon) to help teachers define, manage, and
navigate the assignment tasks, as shown in Figure 3.
Figure 3: Teacher Navigation Ribbon.
The ribbon actions provide a complete set of
tools to define peer assessment tasks, manage task-
specific rubrics, assign reviewers, give final grades,
and publish the results.
5.1.1 Task Definition with Rubrics
The task definition begins with defining some basic
attributes of the assignments. These attributes
include the name and description, the deadlines, and
the associated materials and resources. Additionally,
there are a number of specific settings to be
configured, which are related to the peer assessment
itself. These settings concern the start and end of the review phase, the impact of the review on the final grade, and the task-specific rubrics (see Figure 4).
Figure 4: Task Definition with Rubrics.
There are well-researched and documented methods to enhance the effectiveness of peer
assessment by asking direct questions for the peer to
answer, in order to assess the quality of work by the
author (Gielen et al., 2010). This way, the reviewer
can easily reflect on the quality of work in a goal-
oriented manner. Hence, we implemented a rubric
system that allows tutors to define specific questions
related to each task, and also reuse pre-defined
rubrics. The process for defining rubrics is included
in the task definition itself. A typical rubric has two
TheEffectofPeerAssessmentRubricsonLearners'SatisfactionandPerformanceWithinaBlendedMOOCEnvironment
153
attributes: name and the actual rubric question.
Further, it contains descriptions that define the
learning outcome and performance levels to provide
enough information to guide learners in doing the
peer assessment review. Teachers can select multiple
rubrics to associate with an assignment definition.
Once the assignment task has been defined, an
automated workflow takes care of publishing the assignment at the specified time along with its submission deadline. Meanwhile, another workflow takes care of review submission after the review start date.
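The following Python sketch illustrates what such a task definition with attached rubrics might look like as a data structure. The class and field names are assumptions made for illustration; the actual module is built on SharePoint and may organize these settings differently.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Rubric:
    name: str                 # short label, e.g. "Clarity of argument"
    question: str             # the rubric question the reviewer answers
    description: str = ""     # learning outcome / performance levels guiding the reviewer

@dataclass
class AssignmentTask:
    name: str
    description: str
    submission_deadline: datetime
    review_start: datetime
    review_deadline: datetime
    review_weight: float      # impact of the peer review on the final grade
    rubrics: List[Rubric] = field(default_factory=list)
    materials: List[str] = field(default_factory=list)   # associated resources

# Example: a task with two reusable rubrics attached (dates and weight are illustrative).
task = AssignmentTask(
    name="Team project report",
    description="Submit your group's project report as a PDF.",
    submission_deadline=datetime(2014, 11, 10),
    review_start=datetime(2014, 11, 11),
    review_deadline=datetime(2014, 11, 18),
    review_weight=0.3,
    rubrics=[
        Rubric("Structure", "Is the report clearly structured?"),
        Rubric("Evidence", "Are the claims supported by suitable sources?"),
    ],
)
```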
5.1.2 Assigning Reviewers
Course teachers can assign solutions submitted by
learners to different peers for reviewing by selecting
from a list (see Figure 5).
Figure 5: Assigning Reviewers.
Future versions of the system should automate the distribution process. There are mechanisms to reverse the process if there is a problem or a mistake. After this, the assigned reviews are visible to the learners according to the specified dates, and if a review assignment is made after the review start date, it is shown to the learners directly.
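Since future versions should automate the distribution of reviews, the following Python sketch shows one simple way such an assignment step could work: a random rotation that guarantees nobody reviews their own submission. It is a hypothetical illustration, not part of the current module.

```python
import random
from typing import Dict, List

def assign_reviewers(submission_authors: List[str], seed: int = 0) -> Dict[str, str]:
    """Map each author to a peer reviewer such that nobody reviews
    their own submission (a simple random rotation)."""
    if len(submission_authors) < 2:
        raise ValueError("Peer review needs at least two submissions.")
    authors = submission_authors[:]
    random.Random(seed).shuffle(authors)
    # Rotating the shuffled list by one position guarantees reviewer != author.
    return {author: authors[(i + 1) % len(authors)]
            for i, author in enumerate(authors)}

# Example: four submitting learners, each assigned one peer to review.
print(assign_reviewers(["alice", "bob", "carol", "dave"]))
```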
5.1.3 Publishing Reviews
After grading all the solutions, teachers can publish
the review results to the learners at once using an
action from the ribbon. As a result, the learners are
able to see the reviews submitted by their peers.
5.2 Learner Perspective
The navigation ribbon contains actions for learners to submit solutions and perform peer review tasks.
5.2.1 Submitting Solutions
Once the assignment has been published, the
learners can see the details of the assignment and
work on their solutions until the proposed deadline.
Learners can add a solution by providing a description and uploading the documents and resources relevant to it. Learners can work
individually, or in groups, depending on the
assignment’s requirements (see Figure 6).
Figure 6: Submitting Solutions.
5.2.2 Peer Assessment
There are a number of peer assessment
methodologies dealing with the anonymity of author
and reviewer, e.g. Single Blind Review (reviewer is
anonymous, author is known), Double Blind Review
(both reviewer and author are anonymous) and lastly
the Open Review (no anonymity). For this implementation, we decided to use the Double
Blind Review, as it reduces the chances of biased
marking (Sitthiworachart & Joy, 2004).
Figure 7: Peer Assessment Interface.
CSEDU2015-7thInternationalConferenceonComputerSupportedEducation
154
Once the peer review phase starts, the learners can
see a list of reviews assigned to them by the
teachers. The interface for adding a review can be
seen in Figure 7. It contains two sections, the
submitted solution on the top and the review section
with rubrics at the bottom. The reviewers can see the
documents and resources attached to the solution
and any comments given by the authors. They can
add their comments against the rubric questions in the review section, along with an option to upload files and provide a grade as well.
6 CASE STUDY
In October 2014, we conducted a third case study to
investigate the usability and effectiveness of the peer
assessment module. We used the enhanced edition
of L²P-bMOOC to offer a bMOOC on “Education
and the Issues of the Age” at Fayoum University,
Egypt in cooperation with RWTH Aachen
University. Again, the course was offered both
formally to students from Fayoum University and
informally with open enrollment to anyone who was interested in teaching and education issues. The
teaching staff was composed of one professor and one assistant researcher from Fayoum University as well
as one assistant researcher from RWTH Aachen
University. A total of 133 participants completed
this course. 92 formal participants took the course to
earn credits from Fayoum University. These
participants were required to complete the course
and obtain positive grading of assignments. The
remaining 41 were informal participants who did not attend the face-to-face sessions. They undertook the learning activities at their own pace without receiving any academic credit. The
teaching staff provided nine short video lectures and
the course participants added another 25 related
videos. Participants in the course were encouraged
to use video maps to organize their lectures, and
collaboratively create and share knowledge through
annotations, comments, discussion threads, and
bookmarks. Participants used the peer assessment
module for the submission of a team project report.
After the submission, every team reviewed another team's work and provided feedback based on the rubric questions provided by the teaching staff. These reviews were then taken into consideration by the teaching staff while compiling their own feedback on the team projects. Once the teacher reviews were completed, the final corrections were made public to the students, who could see both reviews for their own project, namely the review from the peers and the review from the teacher.
7 EVALUATION
We conducted a thorough evaluation of the peer
assessment module in L²P-bMOOC in order to
answer the main research questions in this work. The
aim was to evaluate the usability and effectiveness
of the module, including the impact on learning
outcome and the quality of feedback. Our endeavor
was also to investigate which peer assessment model
fits best in a bMOOC context. We employed an
evaluation approach based on the ISONORM
9241/110-S as a general usability evaluation as well
as a custom questionnaire to measure the
effectiveness of peer assessment in L²P-bMOOC.
7.1 Usability Evaluation
The purpose of the usability evaluation is to measure learners' satisfaction with the peer assessment
module as well as to identify the issues for
improvement. The ISONORM 9241/110-S
questionnaire was designed based upon the
International Standard ISO 9241, Part 110 (Prümper,
1997). We used this questionnaire as a general
usability evaluation for the peer assessment module.
It consists of 21 questions classified into seven main
categories. Participants were asked to respond to each question on a scale ranging from a positive statement (7) to its mirroring negative counterpart (1). The questionnaire comes with an evaluation framework that aggregates the usability aspects into a single score between 21 and 147. A total of 57
out of 133 participants completed the questionnaire.
The evaluators exhibited a diversity in age, with ages ranging from 18 to 40+ years and almost 65% of the evaluators between the ages of 18 and 24. Around 70% of the evaluators were Bachelor's students, 17% were in Master's courses, and the remaining 12% were pursuing a PhD. All of them
had taken one or more online courses. The results
obtained from the ISONORM 9241/110-S usability
evaluations are summarized in Table 2.
The overall score was 99.1, which translates to “Everything is all right! Currently there is no reason to make changes to the software in regards of usability” (Prümper, 1997). This result reflects a high level of user satisfaction with the usability of the peer assessment module in L²P-bMOOC.
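As a worked illustration of the scoring just described (21 items rated from 1 to 7, aggregated to a score between 21 and 147), the Python sketch below simply sums the item ratings. This plain summation is an assumption consistent with the stated range, not necessarily the exact aggregation used by the ISONORM evaluation framework.

```python
from typing import List

def isonorm_score(ratings: List[int]) -> int:
    """Aggregate 21 item ratings on a 1-7 scale into a single score (21-147)."""
    if len(ratings) != 21 or not all(1 <= r <= 7 for r in ratings):
        raise ValueError("Expected 21 ratings on a 1-7 scale.")
    return sum(ratings)

# A respondent answering every item with 5 would yield a score of 105,
# in the same region as the overall result of 99.1 reported in this study.
print(isonorm_score([5] * 21))  # -> 105
```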
TheEffectofPeerAssessmentRubricsonLearners'SatisfactionandPerformanceWithinaBlendedMOOCEnvironment
155
Table 2: ISONORM 9241/110-S Evaluation Matrix (N=57).

Factor                              Aspect                 M    Sum
Suitability for tasks               Integrity              5.2  15
                                    Streamlining           5.5
                                    Fitting                4.3
Self-descriptiveness                Information content    4.9  14.5
                                    Potential support      4.8
                                    Automatic support      4.9
Conformity with user expectations   Layout conformity      4.7  14
                                    Transparency           4.7
                                    Operation conformity   4.6
Suitability for learning            Learnability           5.4  14.7
                                    Visibility             4.8
                                    Deducibility           4.5
Controllability                     Flexibility            4.9  14.2
                                    Changeability          4.5
                                    Continuity             4.8
Error tolerance                     Comprehensibility      4.7  13.5
                                    Correctability         4.6
                                    Correction support     4.2
Suitability for individualization   Extensibility          4.0  13.2
                                    Personalization        4.3
                                    Flexibility            4.9
ISONORM score                                                   99.1
7.2 Effectiveness Evaluation
In our study, we focused on peer assessment to
support groups or individuals to review, grade and
provide in-depth feedback for their peers, based on
flexible rubrics. The effectiveness evaluation aims at
investigating the impact on learning outcomes and
the quality of feedback. This study included the
design of a questionnaire adapted from (Brindley &
Scoffield, 1998; Wolf & Stevens, 2007; Kulkarni et
al., 2013). The questionnaire consisted of two main
parts. The first part contained 21 items in the two categories mentioned above, as illustrated in Table 3.
The second part aimed at investigating the most
effective peer assessment model in a bMOOC
setting, as presented in Table 4. To ensure the
relevance of these questions, a pre-test was
conducted with 5 learners and 5 learning
technologies experts. Their feedback led to the refinement of some questions and the replacement of others. The revised questionnaire was then given to
the “Education and the Issues of the Age” course
participants.
Table 3: The Effectiveness Evaluation of Peer Assessment in L²P-bMOOC (N=57).

No  Peer Assessment Evaluation Items                                                             M    SD
Impact on learning outcome
1   The peer feedback helped me to see errors in my own work.                                    4.5  0.50
2   Reviewing others' work helped me to reflect on my own work.                                  4.4  0.53
3   The received feedback helped me to reflect on my own work.                                   4.2  0.51
4   The peer assessment helped me to learn how to give constructive feedback to peers.           4.2  0.62
5   The peer feedback helped me to come up with new ideas.                                       4.4  0.53
6   The comments I received from peer feedback helped to improve the quality of my work.         4.3  0.48
7   The received feedback helped me to get more information about the learning topic.            4.4  0.53
8   Reviewing others' work helped me to expand knowledge about the learning topic.               4.3  0.51
9   The peer assessment increased my ability in organizing ideas and contents in my work.        4.1  0.50
    Impact on learning outcome average                                                           4.3  0.52
Quality of feedback
10  The scoring grade I received from peer feedback was valid.                                   4.2  0.51
11  The peer feedback I received is accurate and credible.                                       4.2  0.50
12  I am confident that my peers have enough ability to assess my work.                          4.2  0.53
13  I am confident that I have the ability to assess peers' work.                                4.3  0.71
14  I put sufficient effort into grading peers' work.                                            4.5  0.56
15  The peer assessment rubrics and their descriptions were sufficiently clear.                  4.3  0.57
16  The peer assessment rubrics supported in providing peers with detailed feedback on their assignment work.  4.4  0.62
17  The peer assessment rubrics assisted me in focusing on particular details in the peers work.  4.4  0.53
18  The description of the rubrics helped me understand what teachers expected in the evaluation report.  4.4  0.54
19  The peer assessment rubrics made the review task clearer.                                    4.4  0.56
20  The peer assessment rubrics made the review process more transparent.                        4.3  0.54
21  The peer assessment rubrics were necessary to complete my review task.                       4.4  0.53
    Quality of feedback average                                                                  4.3  0.55
1. Strongly disagree … 5. Strongly agree
CSEDU2015-7thInternationalConferenceonComputerSupportedEducation
156
7.2.1 Impact on Learning Outcome
Respondents were asked to indicate whether the peer assessment had affected their learning outcome. As
can be seen from Table 3, the overall response to the
evaluation items 1-9 was very positive at 4.3 with
acceptable standard deviation of 0.52. This indicates that peer assessment is a powerful evaluation method for detecting and correcting errors, reflecting, and criticizing, which are key elements of double-loop learning. The concept of double-loop learning was
introduced by Argyris and Schön (1978) within an
organizational learning context. According to the
authors, learning is the process of detecting and
correcting errors. Error correction happens through a
continuous process of inquiry, reflection, and (self-)
criticism, which enables learners to test, challenge,
and eventually update their knowledge, and in so doing improve their learning outcome (Chatti et
al., 2012).
Peer assessment further fosters continuous
knowledge creation, which is a prerequisite for
effective learning (Nonaka and Takeuchi, 1995).
This can be attributed to the fact that in the peer
assessment process, learners can learn from both negative and positive aspects of peers' work and make use of them to gain an in-depth understanding of the learning topic and improve their knowledge, which
leads to an enhancement of their learning
performance.
7.2.2 Quality of Feedback
Key challenges in peer assessment include the
diversity of reviewers’ background and prior
experience (Yousef et al., 2015b), the lack of
accuracy and credibility of peer feedback (Suen,
2014) as well as the lack of transparency of the
review process. Moreover, MOOC participants do
not trust the validity and reliability of peer
assessment results due to the absence of a clear
evaluation authority (e.g. teacher) and the low
perceived expertise of students (McGarr & Clifford,
2013).
Rubrics provide a possible solution to overcome
these issues by offering clear guidelines when
assessing peers' work. Items 10 to 21 in Table 3 are
concerned with the quality of the rubric-based peer
feedback approach employed in L²P-bMOOC. In
general, the respondents agreed that harnessing
rubrics had a positive impact on the quality of the
peer assessment task, in terms of the accuracy and
credibility of peer feedback (item 11), transparency
of the review process (item 20), as well as validity
and reliability of peer assessment results (item 10
and 12). Moreover, the study revealed that participants were confident in their ability to assess peers' work. They confirmed that following clear
rubrics helped them understand the evaluation
criteria and supported them in providing peers with
detailed feedback.
7.3 Peer Assessment Models
An important goal in our study was also to
investigate which peer assessment model fits best in
a bMOOC context, as presented in Table 4.
Table 4: Peer Assessment Models in bMOOCs.

Peer Assessment Models                           Mean  SD
Time
  Early feedback                                 4.6   0.50
  Delayed feedback                               1.7   0.44
Anonymity
  Double blind review                            4.6   0.48
  Single blind review                            2.3   0.61
  Open review                                    1.7   0.88
Delivery
  Indirect feedback (i.e., written)              4.6   0.72
  Direct feedback (i.e., face-to-face)           2.2   0.68
Peer Grading
  Review with grading                            3.1   0.86
  Review with partly grading                     4.4   0.79
  Review without grading                         1.9   0.41
Peer Grading Weight
  Contributing to the final official grade       3.8   0.93
  Not contributing to the final official grade   2.9   1.20
Channel
  Single channel feedback (1:1)                  2.0   0.52
  Multiple channel feedback (m:n)                4.8   0.34
Review Loop
  Single loop                                    2.0   0.73
  Multiple loop                                  4.8   0.34
Teacher Role
  Substitution                                   2.1   0.57
  Supplementary                                  4.3   0.58
  Monitoring                                     2.9   0.87
1. Strongly disagree … 5. Strongly agree
We can draw certain conclusions about the most
effective peer assessment practices in bMOOCs as
follows:
Time: Optimal feedback should be provided early in
the assessment process in order to give learners the
opportunity to react and improve their work.
Anonymity: An important aspect of peer assessment
is to ensure the anonymity of the feedback. This
way, reviewers can provide critical feedback and
TheEffectofPeerAssessmentRubricsonLearners'SatisfactionandPerformanceWithinaBlendedMOOCEnvironment
157
grading without considering interpersonal factors, e.g. friendship bias or personal dislikes.
Delivery: Indirect feedback ensures more effective
assessment results as learners feel more comfortable
to give honest feedback without any influence from
peers.
Peer Grading: Peer grading should only be a part of the final grade in order to ensure the validity of the assessment results (see the weighting sketch below).
Channel: Assessment results can be more accurate
and credible when learners receive feedback from
multiple reviewers rather than from a single one.
This way, learners have the chance to receive a
multifaceted feedback on their work.
Review Loop: Having multiple feedback iterations achieves a better learning outcome, as learners can reflect on their assignment work multiple times.
Teacher Role: Teachers should still take an
active role in the peer assessment process, by
defining evaluation rubrics, providing sample
solutions, and checking the peer review results. They
can also help in developing review skills.
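As referenced in the Peer Grading point above, the following minimal Python sketch illustrates letting peer grades contribute only partly to the final official grade. The weight of 0.3 is an arbitrary illustrative value, not one prescribed by this study.

```python
from typing import List

def final_grade(teacher_grade: float, peer_grades: List[float],
                peer_weight: float = 0.3) -> float:
    """Combine the teacher's grade with the average peer grade,
    with the peer contribution capped by peer_weight."""
    if not 0 <= peer_weight <= 1:
        raise ValueError("peer_weight must lie between 0 and 1.")
    peer_avg = sum(peer_grades) / len(peer_grades)
    return (1 - peer_weight) * teacher_grade + peer_weight * peer_avg

# Example: teacher grade 85, three peer reviews averaging 78.
print(round(final_grade(85, [80, 75, 79]), 1))  # -> 82.9
```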
8 CONCLUSIONS
MOOCs have attracted a huge number of participants around the globe to attend free online courses in a variety of domains. However, one of the
greatest challenges facing MOOCs is how to assess
the learners’ performance in larger class sizes
beyond traditional automated assessment methods.
Peer assessment has been proposed as an effective
assessment method in MOOCs to address this
challenge. The issue is, however, how to ensure the quality of the peer assessment in terms of validity and reliability. Moreover, assessment in blended
MOOCs (bMOOCs) introduces unique challenges
regarding the best peer assessment model in a
bMOOC context.
This paper presents the details of a study
conducted to investigate peer assessment in
bMOOCs. The study results show that flexible
rubrics have the potential to make the feedback
process more accurate, credible, transparent, valid,
and reliable, thus ensuring the quality of the peer
assessment task. Furthermore, early feedback,
anonymity, indirect feedback, peer grading as only a
part of the final grade, multiple channel feedback,
multiple feedback loops, as well as a supplementary teacher role proved to be the most effective peer assessment practices in bMOOCs.
ACKNOWLEDGEMENTS
We are grateful to Dr. Ahmed Ramadan Khatiry,
Fayoum University for providing the course
material. We also thank Vlatko Lukarov, Center for
Innovative Learning Technologies (CiL), RWTH
Aachen University for his valuable comments and
feedback on the first drafts of the paper.
REFERENCES
Argyris, C., & Schon, D. (1978). Organizational learning:
A theory of action approach. Reading, MA: Addison-Wesley.
Brindley, C., & Scoffield, S. (1998). Peer assessment in
undergraduate programmes. Teaching in higher
education, 3(1), 79-90.
Bruff, D. O., Fisher, D. H., McEwen, K. E., & Smith, B.
E. (2013). Wrapping a MOOC: Student perceptions of
an experiment in blended learning. MERLOT Journal
of Online Learning and Teaching, 9(2), 187-199.
Chatti, M. A., Jarke, M., & Schroeder, U. (2012). Double-
loop learning. Encyclopedia of the sciences of
learning, 1035-1037.
Chatti, M. A. (2010) The LaaN Theory. In:
Personalization in Technology Enhanced Learning: A
Social Software Perspective. Aachen, Germany:
Shaker Verlag, pp. 19-42.
Chatti, M. A., Lukarov, V., Thüs, H., Muslim, A., Yousef,
A. M. F., Wahid, U., Greven, C., Chakrabarti, A.,
Schroeder, U. (2014). Learning Analytics: Challenges
and Future Research Directions. eleed, Iss. 10.
Coursera. (2015). How will my grade be determined? Retrieved on 20th of January, 2015, from http://help.coursera.org/customer/portal/articles/1163304-how-will-my-grade-be-determined-
Daniel, J. (2012). Making sense of MOOCs: Musings in a
maze of myth, paradox and possibility. Journal of
Interactive Media in Education, 3.
Davis, H., Dikens, K., Leon-Urrutia, M., Sanchéz-Vera,
M. M., & White, S. (2014). MOOCs for Universities
and Learners an analysis of motivating factors. In
Proc. CSEDU 2014 conference, pp. 105-116.
INSTICC, 2014.
Díez, J., Luaces, O., Alonso-Betanzos, A., Troncoso, A.,
& Bahamonde, A. (2013, December). Peer assessment
in MOOCs using preference learning via matrix
factorization. In NIPS Workshop on Data Driven
Education.
edX. (2015). Open Response Assessments. Retrieved on 20th of January, 2015, from http://edx-guide-for-students.readthedocs.org/en/latest/SFD_ORA.html
Gielen, S., Peeters, E., Dochy, F., Onghena, P., &
Struyven, K. (2010). Improving the effectiveness of
peer feedback for learning. Learning and Instruction,
20(4), 304-315.
CSEDU2015-7thInternationalConferenceonComputerSupportedEducation
158
Grünewald, F., Meinel, C., Totschnig, M., & Willems, C.
(2013). Designing MOOCs for the Support of Multiple
Learning Styles. In Scaling up Learning for Sustained
Impact (pp. 371-382). Springer Berlin Heidelberg.
Hill, P. (2013). Some validation of MOOC student
patterns graphic. Retrieved from http://mfeldstein.com/validation-mooc-student-patterns-graphic/
Jordan, K. (2013). MOOC completion rates: The data.
Retrieved on 20th of January, 2015, from http://www.katyjordan.com/MOOCproject.
Kaplan, F., & Bornet, C. A. M. (2014). A Preparatory
Analysis of Peer-Grading for a Digital Humanities
MOOC. In Digital Humanities 2014: Book of
Abstracts (No. EPFL-CONF-200911, pp. 227-229).
Kulkarni, C., Wei, K. P., Le, H., Chia, D., Papadopoulos,
K., Cheng, J., Koller, D., & Klemmer, S. R. (2013).
Peer and self assessment in massive online classes.
ACM Transactions on Computer-Human Interaction
(TOCHI), 20(6), 33.
Luo, H., Robinson, A. C., & Park, J. Y. (2014). Peer
Grading in a MOOC: Reliability, Validity, and
Perceived Effects. Online Learning: Official Journal
of the Online Learning Consortium, 18(2).
McGarr, O., & Clifford, A. M. (2013). ‘Just enough to
make you take it seriously’: exploring students’
attitudes towards peer assessment. Higher Education,
65(6), 677-693.
McMullan, M., Endacott, R., Gray, M. A., Jasper, M.,
Miller, C. M., Scholes, J., & Webb, C. (2003).
Portfolios and assessment of competence: a review of
the literature. Journal of advanced nursing, 41(3),
283-294.
Nielsen, J. (1994). Usability inspection methods. In
Conference companion on Human factors in
computing systems (pp. 413-414). ACM.
Nonaka, I., & Takeuchi, H. (1995). The knowledge-
creating company: How Japanese companies create
the dynamics of innovation. Oxford university press.
Ostashewski, N., & Reid, D. (2012). Delivering a MOOC
using a social networking site: the SMOOC Design
model. In Proc. IADIS International Conference on
Internet Technologies & Society, (2012), 217-220.
O'Toole, R. (2013) Pedagogical strategies and
technologies for peer assessment in Massively Open
Online Courses (MOOCs). Discussion Paper.
University of Warwick, Coventry, UK: University of
Warwick. Retrieved from:
http://wrap.warwick.ac.uk/54602/
Piech, C., Huang, J., Chen, Z., Do, C., Ng, A., & Koller,
D. (2013). Tuned models of peer assessment in
MOOCs. arXiv preprint arXiv:1307.2579.
Prümper, J. (1997). Der Benutzungsfragebogen
ISONORM 9241/10: Ergebnisse zur Reliabilität und
Validität. In Software-Ergonomie’97 (pp. 253-262).
Vieweg+ Teubner Verlag.
Sánchez-Vera, M. M., & Prendes-Espinosa, M. P. (2015).
Beyond objective testing and peer assessment:
alternative ways of assessment in MOOCs. RUSC.,
12(1). pp. 119-130.
Sandeen, C. (2013). Assessment’s place in the new
MOOC world. Research & Practice in Assessment, 8
(1), 5-12.
Sitthiworachart, J., & Joy, M. (2004). Effective peer
assessment for learning computer programming. In
ACM SIGCSE Bulletin (Vol. 36, No. 3, pp. 122-126).
ACM.
Suen, H. K. (2014). Peer assessment for massive open
online courses (MOOCs). The International Review of
Research in Open and Distance Learning, 15(3).
Topping, K. (1998). Peer assessment between students in
colleges and universities. Review of Educational
Research, 68(3), 249-276.
Wolf, K., & Stevens, E. (2007). The role of rubrics in
advancing and assessing student learning. The Journal
of Effective Teaching, 7(1), 3-14.
Yin, S., & Kawachi, P. (2013). Improving open access
through prior learning assessment. Open Praxis, 5(1),
59-65.
Yorke, M. (2007). Assessment, especially in the first year
of higher education: Old principles in new wrapping.
In REAP International Online Conference on
Assessment Design for Learner Responsibility.
Yousef, A. M. F., Chatti, M. A., Ahmad, I., Schroeder, U.,
& Wosnitza, M. (2015a, accepted). An Evaluation of
Learning Analytics in a Blended MOOC Environment.
The European MOOC Stakeholder Summit 2015.
Yousef, A. M. F., Chatti, M. A., Wosnitza, M., &
Schroeder, U. (2015b). A Cluster Analysis of MOOC
Stakeholder Perspectives. RUSC. Universities and
Knowledge Society Journal, 12(1), 74-90.
Yousef, A. M. F., Chatti, M. A., Schroeder, U. &
Wosnitza, M. (2015c, in press). A Usability
Evaluation of a Blended MOOC Environment: An
Experimental Case Study. The International Review of
Research in Open and Distributed Learning.
Yousef, A. M. F., Chatti, M. A., Schroeder, U., Wosnitza,
M., Jakobs, H. (2014a). MOOCs - A Review of the
State-of-the-Art. In Proc. CSEDU 2014 conference,
Vol. 3, pp. 9-20. INSTICC, 2014.
Yousef, A. M. F., Chatti, M. A., Schroeder, U., Wosnitza,
M. (2014b). What Drives a Successful MOOC? An
Empirical Examination of Criteria to Assure Design
Quality of MOOCs. In Proc. ICALT 2014, 14th IEEE
International Conference on Advanced Learning
Technologies, 44-48.
TheEffectofPeerAssessmentRubricsonLearners'SatisfactionandPerformanceWithinaBlendedMOOCEnvironment
159