Numerous studies (Boscardin and Penuel, 2012; Moss and Crowley, 2011; Kay and LeSage, 2009; Bruff, 2009; Moredich and Moore, 2007) have reported that the use of classroom response systems can improve student engagement (to
the benefit of student learning) and (as noted previ-
ously) provide an opportunity for both students and
instructors to receive important feedback. Although
virtually all surveyed materials report that students are satisfied, on more than one occasion (Blasco-Arcas et al., 2013; Webb and Carnaghan, 2006) it
has been noted that the studies that report learning
improvements might be observing an effect associ-
ated with improved interactivity in the classroom, and
cannot conclusively demonstrate that the classroom
response system is actually required to achieve this
effect. This reasonable consideration notwithstand-
ing, the use of classroom response systems as a tool to
engage students remains largely undisputed. Class-
room response systems have also been used to suc-
cessfully identify students that are struggling (Liao
et al., 2016), and Porter et al. showed that perfor-
mance on classroom response system questions early in the term
was a good predictor of students’ outcomes at the end
of the term (Porter et al., 2014).
Unfortunately, it must also be acknowledged that
there is evidence that the adoption of a classroom
response system could present a barrier to students
(which could, naturally, negatively interfere with
knowledge retention). Draper and Brown (2004) re-
ported that some students felt that the system
could actually be a distraction from the learning out-
come. Furthermore, the review by Kay and LeSage (2009) cited works that discussed the potential for in-class discussions to confuse students by exposing them to differing approaches and perspectives.
This is, naturally, a potential pitfall for any activity
that prompts in-class discussion, and given the nu-
merous reports of the potential advantages associated
with classroom response systems, we believe
that there is sufficient evidence to motivate the inves-
tigation of these systems as a tool for improving con-
tent retention.
It should be noted that a number of reviews
(Boscardin and Penuel, 2012; Kay and LeSage, 2009;
Judson and Sawada, 2002) have observed that much of
the research into the benefits and drawbacks associ-
ated with the use of classroom response systems has
been qualitative and/or anecdotal, and that there are
relatively few studies using control groups and quan-
titative analyses. The authors believe this study to be
among the first to offer a quantitative assessment of
the use of classroom response systems as a tool to
improve content retention (as opposed to a tool ex-
plicitly used for improving engagement or providing
student feedback). It should, however, be specifically
noted that in the approach described by Brewer (2004), used in the biology faculty at the University of Montana, although response system questions were presented to students during the class in which the material was covered, correct answers were not revealed to the students until the following class.
Although this practice could conceivably improve re-
tention as well, the express purpose of using the re-
sponse system was described in that study to be feed-
back (for both the instructors and students), not an
improvement to retention. Similarly, Caldwell (2007)
does not mention retention specifically but does de-
scribe a “review at the end of a lecture”; this could
also conceivably improve retention if these questions
pertained to the beginning of a particularly lengthy
lecture.
It should also be emphasized that our paper is con-
cerned only with the potential applications of class-
room response systems to the problem of content re-
tention; although several other studies have looked
at classroom response systems for the retention of
students in computer science programs, this is not
directly related to content retention.
Porter, Simon, Kinnunen, and Zazkis (2010; 2013), for instance, indicated that they used clickers as one of the best practices for student retention, but it is not clear how the classroom response system was used or how it affected content or knowledge retention. Fur-
thermore, unlike Tew and Dorn (2013), we do not aim
to develop general instruments for assessment. Our
approach is ad hoc, with the specific objective of determining whether there is measurable evidence that a classroom response system can help improve retention of con-
tent and knowledge by students.
A related issue for which there have been sev-
eral studies (albeit with conflicting results) concerns
whether the use of classroom response systems can improve student performance on final ex-
ams. Diana Cukierman suggested that studying the
effect of a classroom response system on outcomes
such as final exam scores may be infeasible (Cukier-
man, 2015), and an experiment by Robert Vinaja that
used recorded lectures, videos, electronic material,
and a classroom response system did not demonstrate
an impact of these practices on grade performance
(Vinaja, 2014). In contrast, Simon et al. demonstrated in a CS0 course that peer-instructed subjects outperformed those who were traditionally instructed (Simon et al., 2013), and Daniel Zingaro confirmed this finding in a CS1 context (Zingaro, 2014). Zingaro
et al. went further to show that students who learn
in class retain the learned knowledge better than stu-