In performing a case-by-case analysis to get a better picture of how rankings could have affected students, we also identified potentially problematic cases. Of course, even within the same subgroups, we found a range of different behaviors. For example, there were students who discarded the ranking information completely, students who tried to keep up with the others only at the beginning, and students who pursued a higher ranking until the end of the activity. Introducing ranking information into a learning activity could, in some cases, produce negative results. Two students of the Indifferent
subgroup who expressed a negative opinion about ranking in the questionnaire said that they did not like the competitive aspect the rankings injected into the activity, and for this reason they ignored them completely. When we checked the activity logs and post-test scores of these two students, we saw that both were above the average of their subgroup, while one of them was also among the top 10 positions in several usage metrics. At the other extreme, a student of the InFavor group actively tried to stay in the top positions throughout the activity, and she managed to do so for most of the metrics of the study (Logins: 66; Views: 43; Reviews: 3; Peer Score: 3.53; Post-test: 7.80). So, while this student
had visited the learning environment twice as often as the InFavor average and had viewed (or at least opened) 43 of the 51 available peer works, she submitted only the minimum number of reviews and was well below average in the scores she received from her peers on her initial answers and from the two raters on the post-test. From our point of view, this student lost sight of the actual goals of the activity (acquiring domain knowledge and developing review skills) and concentrated instead on improving her rankings. The short durations recorded for each of her peer work views also suggest that her engagement in the activity was superficial.
This behavior is very close to what Baker et al. (2008) define as "gaming the system", namely an effort to succeed by actively exploiting the properties of the system rather than by pursuing the learning goals.
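To illustrate the kind of log-based screening that can surface such cases, the sketch below shows one possible heuristic, not the procedure used in this study: flagging students whose raw activity counts are high while their view durations and review output are low. All field names and thresholds here are hypothetical, chosen only to mirror the pattern described above.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical log record; the field names are illustrative,
# not the actual schema of the study's activity logs.
@dataclass
class StudentLog:
    student_id: str
    logins: int
    view_durations_sec: list  # duration of each peer-work view, in seconds
    reviews_submitted: int

def flag_possible_gaming(log: StudentLog,
                         min_logins: int = 30,
                         max_median_view_sec: float = 20.0,
                         max_reviews: int = 3) -> bool:
    """Flag the pattern discussed above: high raw activity (many logins)
    combined with very short peer-work views and few submitted reviews.
    Thresholds are assumptions for illustration only."""
    if log.logins < min_logins:
        return False  # low overall activity: not the pattern in question
    short_views = median(log.view_durations_sec) <= max_median_view_sec
    few_reviews = log.reviews_submitted <= max_reviews
    return short_views and few_reviews

# Example resembling the case above: many logins, brief views, minimal reviews.
student = StudentLog("s17", logins=66,
                     view_durations_sec=[12, 8, 15, 10, 9] * 8,
                     reviews_submitted=3)
print(flag_possible_gaming(student))  # True
```

Such a screen can only nominate candidates for the kind of manual, case-by-case inspection performed here; short views alone do not prove superficial engagement.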
These three cases were the extremes in our analysis, but they still provide insights into how, under certain circumstances, ranking information could have the opposite of the effect an instructional designer intends. In addition, some students mentioned that a low ranking in the Peer Score metric would prompt them to improve their initial work, while others found a high position in this metric reassuring. The issue here lies in the fact that students' and raters' opinions about the quality of a work do not always match, so students who rely solely on the ranking information may be misled.
In conclusion, providing students with ranking information could be beneficial, especially when students develop a positive attitude towards having this information. In these cases, students' intrinsic motivation is increased and their engagement is enhanced. However, attention must also be paid to how students act during the learning activity: in certain cases, chasing after rankings could lead to negative attitudes or superficial engagement.
REFERENCES
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York, NY: Longman.
Baker, R., Walonoski, J., Heffernan, N., Roll, I., Corbett, A., & Koedinger, K. (2008). Why students engage in "gaming the system" behavior in interactive learning environments. Journal of Interactive Learning Research, 19(2), 185-224.
Demetriadis, S. N., Papadopoulos, P. M., Stamelos, I. G., & Fischer, F. (2008). The effect of scaffolding students' context-generating cognitive activity in technology-enhanced case-based learning. Computers & Education, 51(2), 939-954.
Denny, P. (2013). The effect of virtual achievements on student engagement. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). New York, NY: ACM.
Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: Defining "gamification". In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments (MindTrek '11) (pp. 9-15). New York, NY: ACM.
Goldin, I. M., & Ashley, K. D. (2011). Peering inside peer-review with Bayesian models. In G. Biswas et al. (Eds.), AIED 2011, LNAI 6738 (pp. 90-97). Berlin: Springer-Verlag.
Hansen, J., & Liu, J. (2005). Guiding principles for effective peer response. ELT Journal, 59, 31-38.
Li, L., Liu, X., & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 41(3), 525-536.
Liou, H. C., & Peng, Z. Y. (2009). Training effects on computer-mediated peer review. System, 37, 514-525.
Lundstrom, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer review to the reviewer's own writing. Journal of Second Language Writing, 18, 30-43.