Comparing Tool-supported Lecture Readings and Exercise Tutorials in Classic University Settings

Tenshi Hara¹, Felix Kapp², Iris Braun¹ and Alexander Schill¹

¹Chair of Computer Networks, Faculty of Computer Science, Technische Universität Dresden, Dresden, Germany
²Chair of Learning and Instruction, Department of Psychology, Faculty of Science, Technische Universität Dresden, Dresden, Germany
Keywords: Audience Response System, Virtual Whiteboard, Q&A System, Discussion System, Panel, Comparison, Lecture Reading, Exercise Tutorial, Auditorium, AMCS.
Abstract: Teaching in classic university courses offers too little interaction between docents and students and should be improved. Approaches addressing this range from simple voting systems to Clickers and Audience Response Systems, which can improve interaction and student motivation. However, different university course settings are affected in different ways by these systems. Therefore, this paper presents a comparison of a selected range of these systems (implemented as tool kits) within two course settings, namely readings and tutorials. These tools are Audience Response Systems, Question and Answer Systems (Q&A Systems), Discussion Systems (Panels), as well as Virtual Whiteboard Feedback Systems. A synopsis of feasibility for different settings is provided and concluded with important results on the distinguishability of Q&A Systems and Panels.
1 INTRODUCTION AND RELATED WORK
University courses at German universities aim to expand students' knowledge through the structured presentation of expertise by a docent, which goes beyond textbook knowledge, and by guiding students through the knowledge acquisition process. Teaching in classic courses has been criticised for offering too little interaction between docents and students. Learning as an active, constructive and highly individual process (Seel, 2003) is nearly impossible in huge readings and leaves room for improvement in most of the small course units as well. As a consequence of missing interactivity and engagement, many students fail at learning – they do not manage to build adequate mental models of the domain taught.
There are several approaches to increase interac-
tivity in classes. The spectrum ranges from simple
voting systems (Duncan, 2006) to the method of Peer
Instruction (Mazur, 1997). A large variety of systems
are subsumed under the concepts “Personal Response Systems” (Moss and Crowley, 2011), “Audience Response Systems” (ARS) (Caldwell, 2007) or “Clickers” (Brady et al., 2013). ARS provide feedback to the
docent by giving the audience the possibility to par-
ticipate during the course unit by voting on questions.
By presenting questions during the course unit, students get more involved in the lecture and the docent in turn gets some information about the audience's knowledge and attitude. Almost all of these systems
work as follows: before starting the course unit, the
docent defines one or more questions which are then
presented on a screen during the course unit; the stu-
dents are asked to answer via specialised technical de-
vices (Clickers) or their smartphones. All answers are
aggregated and immediately presented on the screen.
The docent can include the audience’s answers into
the lecture, provide timely feedback, or adapt the lec-
ture to special interests or needs. Some studies show
that ARS are capable of increasing the interactivity
in lectures (Mayer et al., 2009). Core instructional components of projects with ARS are learning questions (Mayer et al., 2009; Weber and Becker, 2013) and live feedback features as in (Feiten et al., 2013).
Various studies have shown that ARS lead to an increase in motivation (e.g., (Prather and Brissenden, 2009)) as well as an increase in achievement (Duncan, 2006). According to (Caldwell, 2007) and (Beatty et al., 2006), ARS with questions can a) direct attention and raise awareness, b) stimulate cognitive processes, and c) help evaluate progress.
When instructional designers have these underlying
processes in mind when designing ARS questions, the questions can provide feedback to the students and can also be used by the instructor as a source of feedback (e.g. (Lantz, 2010)).
(Lantz, 2010) reviewed several studies about Clickers and concluded that providing questions
within university lectures with the help of ARS has
effects on attention, attendance, class preparation and
depth of processing. Again, the questions provide im-
mediate feedback to the students and can be used by
the instructor as a source for feedback.
Existing Clicker Systems provide the possibility
to increase the interactivity in university classes. In
the following contribution we compare two systems
which go beyond classical Clicker System features with regard to their usefulness for university readings and tutorials. The systems used in the existing literature vary with respect to their functions and
the technical possibilities. We consider it useful to
differentiate these functionalities into four categories:
Audience Response Systems, Question and Answer
Systems, Discussion Systems and Virtual Whiteboard
Feedback Systems. The aim of the paper is twofold,
as we aim to 1) present two systems developed in or-
der to support students in university courses, and 2)
emphasise that different content and course unit settings require tools to integrate distinct functions in order to help docents and students successfully master the learning process.
2 SETTING
As we intend to analyse a very focussed set of teaching and learning environments and situations, and to draw conclusions from a comparison thereof, the considered settings shall be briefly outlined in this section.
The research conducted is based at a German uni-
versity and topics are taught through readings, tu-
torials, practicals, as well as combinations thereof.
All three types are based on units spanning 90 min-
utes. Purposed for knowledge presentation, readings present a learning environment in which anywhere from a few to several hundred mostly passive students follow a docent presenting a subject. Tutorials and practicals are favourable means of knowledge consolidation. With respect to the knowledge
presented in the readings, tutorials incorporate theo-
retical repetition and continued derivation, whereas
practicals focus on the practical application of said
knowledge. Typically, tutorials are designed to accommodate seven up to thirty students, whereas practicals often are designed for arbitrarily sized groups partitioned into units of three to eight students.
Due to a low utilisation degree of practicals at our
alma mater, we want to focus on courses consisting
of weekly readings accompanied by weekly or fort-
nightly tutorials. In this focus, both can be consid-
ered for tool-less and tool-supported realisation. The tool-less conduct shall be considered “classic”, signifying that allowed presentation means are limited to voice, books, blackboard, pointer, demonstrators, as well as
overhead and LCD projectors. Tool-supported reali-
sations add to or replace parts of these means with in-
teractive tools allowing two-way interactions between
students and docents. The tools used are integrated
into the curriculum and their primary objective is the
enhancement of knowledge presentation and/or repe-
tition.
Within the outlined focus, an intermediate realisation is of special interest for us, namely a course that can be conducted in a “classic” setting, but is amended by tool support; that is, these tools are not mandatory in order to achieve the course's goal. Hence, this realisation shall be defined as “pseudo-classic”.
Table 1: Test settings for the two systems AMCS and ETTK.

                             Topic               Dur.          n
AMCS   Lecture 1             Psychology          90 min        30
       Lecture 2             Cloud Computing     90 min        18
       Lecture 3             Computer Networks   90 min        120
       Lecture 4             Economics           90 min        200
       Lecture 5             Economics           90 min        197
ETTK   Exercise Tutorial 1   Computer Networks   10 × 90 min   13-26
       Exercise Tutorial 2   Computer Networks   7 × 90 min    7-14
In our investigation, we conducted tests in five
readings with 15 to 180 students and seventeen tu-
torials with 7 to 26 students. An overview is provided
in Table 1.
For the reading environment we utilised our system Auditorium Mobile Classroom Service (AMCS)¹ (Kapp et al., 2014b), (Kapp et al., 2014c), (Kapp et al., 2014a) with its integrated ARS and meta-cognitive activation features. The meta-cognitive prompts allow easy addressing of different target groups or even individual students within the audience.

¹ http://goo.gl/2UhsFn – accessed 25 March 2015
ComparingTool-supportedLectureReadingsandExerciseTutorialsinClassicUniversitySettings
245
Table 2: Synopsis of the technical comparison.
Within the environment of tutorials two prototypes were utilised, namely RNUW² with real time ARS and Q&A System, and ExerciseTool³ with time decoupled ARS, Q&A System and Whiteboard. For both prototypes the Q&A System also served as a Panel, as discussed later.
For applicability of the results and an easier reading experience, we shall pool RNUW and ExerciseTool as one (virtually single) tutorial tool kit (“ETTK”).
From a more technical perspective, the four enhancing tools tested and discussed within this paper, which elevate our classic setting to pseudo-classic, are: Audience Response Systems, Virtual Whiteboard Feedback Systems, Question and Answer Systems, as well as Discussion Systems.
3 COMPARISON
Before outlining the comparison, we wish to note upfront that our results are to be expected; however, to our knowledge no original work has yet provided such a diligent, albeit routine, provisioning of results.
The considered systems with their embedded tools naturally differ from each other due to their design and implementation, making a direct comparison difficult. However, on a conceptual level, suitable comparison conclusions can be drawn along the technical tools, or along classifications such as instant feedback, questions during the course (prepared lecturer's questions during the course unit), in-course collaboration (with Whiteboards), as well as questions asked by the students. The technical comparison suffers from partial indistinguishability of result allocation, whereas the classification comparison discriminates the technical aspects. A synopsis of the technical comparisons is given in Table 2, and one of the classification-based comparisons in Table 3.

Table 3: Synopsis of the classification-based comparison.

³ http://exercisetool.inf.tu-dresden.de/ – accessed 2 September 2014
3.1 Audience Response System (ARS)
ARS present the audience of a course with the ability to provide direct and indirect feedback on the course's setting or its content. We considered two
types of ARS, one providing presentation feedback
(speech parameters) to the docent, and one for pro-
filing of the audience and providing targeted hints to
individual audience members. Both types are avail-
able concurrent to the course’s presentation.
The first type allows students to provide instant
feedback on parameters of the docent’s presentation,
i.e. speed and volume (in the tutorials we additionally
tested explanation clarity). In contrast, the university's semestral evaluation provides only deferred feedback and is rarely finished and published before the end of the semester; hence, the provided feedback does not benefit the evaluating students, but their successors in the next iteration of the course. Therefore, motivation to provide qualitatively suitable feedback for the evaluation
vide qualitatively suitable feedback for the evaluation
is low. Having the feedback available instantaneously
allows docents to react in a timely manner.
The second type allows docents to prepare a set
CSEDU2015-7thInternationalConferenceonComputerSupportedEducation
246
of timed questions and surveys linked to the presentation. These questions and surveys allow – when answered by the students – automated student profiling, which in turn allows individualised system responses and learning experiences for each student. They also allow identification of learning demands, which in turn helps the students focus on knowledge deficits when learning. Questions can range from a simple “Why are you attending this course?”, targeted at providing the docent with a gross overview of the composition of their audience, to individualised control questions like “Earlier you stated you were having difficulties understanding the DNS. Based on what has just been presented, [...]”. Surveys present a distinct type of question, i.e. they link the feedback of the entire audience, or a subset thereof, to form an opinion cross section which can be presented to the audience. E.g., the result of the aforementioned audience composition question could be presented to the audience in order to provide students a sense of cohesion or (intentional) rivalry amongst different degree programmes.
With respect to instant feedback, one of the main incentives is anonymity while providing feedback. The inhibition threshold to provide feedback – especially feedback that may reflect negatively on oneself – is considerably lowered when anonymity is introduced. This effect is considerably more noticeable in larger audiences, i.e. in readings compared to tutorials.
Another important aspect that could be verified
in both environments is the immediacy of feedback
and reaction. However, the extent of the immediacy
varies between both environments. As the investi-
gated readings’ presentations were based on Power-
Point or Keynote, the ARS-based feedback was on
a per-slide basis, meaning that students were able to
provide feedback with the ARS at any time, but the
feedback values were coupled to individual slides and
reset after each change of slides. The docents as well
as the students were able to see the ARS activity in
real time, providing a topical audience report. The
docents were at liberty to react to the feedback at their
own discretion. However, processing of and reacting to the feedback proved to be most practical just before changing slides. This is owed to the limited attention available to the docents, who in general were occupied with the presentation, as well as to the expected reaction time on the side of the students.
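The per-slide coupling described above can be summarised in a minimal Python sketch (hypothetical class and method names; the actual AMCS implementation differs): votes accumulate for the current slide and are discarded on every slide change.

from collections import Counter

class SlideFeedbackAggregator:
    def __init__(self):
        self.current_slide = None
        self.votes = Counter()          # e.g. ("speed", "too fast") -> 3

    def change_slide(self, slide_id):
        # Per-slide basis: feedback values are reset after each change of slides.
        self.current_slide = slide_id
        self.votes.clear()

    def submit(self, parameter, value):
        # Students may vote at any time; the vote counts towards the current slide.
        self.votes[(parameter, value)] += 1

    def report(self):
        # Real-time aggregate visible to docent and students alike.
        return {"slide": self.current_slide, "feedback": dict(self.votes)}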
Ongoing availability of feedback was tested as well. This, however, proved to make the correlation of feedback to presentation challenging. E.g., an “unable to follow” feedback could be submitted by the audience, but the docent's reaction could be delayed until moments later; i.e., in readings by a few slides. This is important to note as within the tutorials a per-slide basis was infeasible, because the investigated tutorials mainly used blackboards as their means of presentation. Hence, we investigated several different
possibilities of feedback processing and reaction. An
exemplary time decoupled ARS voting screen is de-
picted in Figure 1, where students could vote on the
performance of the docent. The aggregated voting re-
sults are presented in Figure 2.
Figure 1: Students’ voting screen allowing feedback on the
docent’s performance, as well as additional feedback.
Figure 2: Docents’ result screen allowing acknowledge-
ment of their performance in the evaluated tutorial.
Hence, we investigated real time ARS feedback
on the time bases of 5, 15 and 30 minutes in the tuto-
rials. An exemplary screenshot of the student view
is provided in Figure 3. The Q&A System, also depicted, will be discussed further in Subsection 3.2 and Subsection 3.3.
Figure 3: Students can use ARS (top) and Q&A System
(bottom) functions.
ComparingTool-supportedLectureReadingsandExerciseTutorialsinClassicUniversitySettings
247
The time decoupled ARS proved to be impractical for the requirements of the audience. Students were able to provide feedback for the tutorial units on the unit level, providing the opportunity for improvements in the next unit; but motivation to participate was as low as with the paper-based semestral evaluations, if not even lower. We therefore changed the feedback to a docent evaluation where students could visually rate the docent's performance in each unit. Unfortunately, due to a small number of samples (3), the performance results are of limited significance.
For the time-based ARS feedback (as a reminder:
bases 5, 15 and 30 minutes), our ETTK allowed an
analysis of docent and student acceptance. However,
strict obedience to the fixed time intervals proved impractical. Varying time consumption of different exercise tasks made a reasonable feedback correlation extremely challenging, especially when the docent was facing the blackboard and was forced to attend to the feedback by turning around, interrupting their line of thought. Furthermore, it was hardly possible for the students to appreciate any feedback-based change in the presentation when the correlation to the original reason for the feedback was surpassed or lost. Therefore, we introduced a “reset button”, which astonishingly proved to be very practical. It eliminated the time constraints of the system while still allowing attributable reactions to the provided feedback. Nevertheless, the point in time of the reaction proved to be crucial. Having the docents react to feedback as soon as they realised there was feedback irritated the students, as reactions in the midst of a line of thought distracted both the students and the docents. Next, having the docent react to feedback between different tasks was practical, but some students judged this deferred response as too slow; however, it generally improved acceptance. Lastly, we investigated having the docent react to the feedback as soon as a line of thought was finished and the docent's positioning (as a person relative to the audience) allowed perception of the feedback. This compromise proved to be worthwhile to the students, as irritations were limited since docents tend to emphasise changes of lines of thought by changes in intonation, speed, etc. anyway.
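A minimal sketch of the described “reset button” behaviour (assumed names, illustrative only): feedback accumulates until the docent has perceived and reacted to it, typically at the end of a line of thought, and is then cleared explicitly so that later feedback remains attributable.

import time

class ResettableFeedbackBuffer:
    def __init__(self):
        self.entries = []                          # (timestamp, parameter, value)

    def submit(self, parameter, value):
        self.entries.append((time.time(), parameter, value))

    def snapshot(self):
        # What the docent sees when they next glance at the feedback screen.
        return list(self.entries)

    def reset(self):
        # Pressed by the docent after reacting; later feedback is then
        # attributable to whatever follows, without fixed time windows.
        self.entries.clear()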
Back to the in-course learning questions: student participation and attentiveness can be improved by presenting students with such questions during a course unit.
An example of such questions in combination with
instant feedback features is shown in Figure 4.
In readings, in-course questions can provide an individualised learning experience for students, even though the docent neither actively nor intentionally addresses individual students over the course of a reading unit.

Figure 4: Individualised learning question arranged with single choice answers. The question on top has been answered incorrectly; an individualised prompt is given. In parallel, feedback on speed (bottom left) and volume (bottom right) of the presentation can be provided.

The in-course questions must be
prepared beforehand and currently cannot be generated out of the course material automatically. However, once questions are stored in the ARS alongside the course material – in our AMCS prototype they were stored within the PowerPoint and Keynote files as notes –, maintaining/revising them can be conducted alongside maintenance/revision of the actual course material. As mentioned earlier, AMCS allows meta-cognitive prompts. These are generated based on the answers submitted by students and on their profiles. A docent may prepare prompts for a selected subset of the audience, which is practical if the audience consists of students from different degree programmes and individual prompts are not targeted at all programmes. Other targetable preconditions include whether students attend the course because it is mandatory in their schedule or out of actual interest; the latter students are more likely susceptible to prompts on research proposals or available thesis topics.
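A minimal sketch of such precondition-based prompt targeting (hypothetical profile fields and example prompts; not the AMCS data model):

from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    # Hypothetical, simplified profile; the actual AMCS model is richer.
    degree_programme: str
    attends_out_of_interest: bool
    wrong_answers: set = field(default_factory=set)   # question ids answered incorrectly

def select_prompts(profile, prompts):
    # Return the prepared prompts whose preconditions match this student.
    # Each prompt carries a predicate over the profile, authored by the docent.
    return [text for predicate, text in prompts if predicate(profile)]

prompts = [
    (lambda p: "dns-basics" in p.wrong_answers,
     "Earlier you stated you were having difficulties understanding the DNS. ..."),
    (lambda p: p.attends_out_of_interest,
     "You might be interested in our currently available thesis topics."),
    (lambda p: p.degree_programme == "Psychology",
     "This example maps onto the memory model discussed last week."),
]

student = StudentProfile("Computer Science", True, {"dns-basics"})
print(select_prompts(student, prompts))   # the first two prompts match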
In general, students reacted positively to the ac-
tivating aspects of in-course questions and deemed
them a valuable addition to readings. For example,
students of the first reading (Psychology) were asked
if they considered the functionalities (learning ques-
tions, messages and feedback to the lecturer) useful.
They mostly agreed (M = 4.19, SD = .68, n = 22;
scale from 1 “I do not agree” to 5 “I fully agree”).
In line with that statement, participants of the fourth
CSEDU2015-7thInternationalConferenceonComputerSupportedEducation
248
reading (Economics) rated learning questions within the lecture as very useful (refer to Figure 5). Unfortunately, with rising student attendance, the server load grows exponentially, which yielded severe performance problems in the third reading (Computer Networks), but these could be resolved later on. Basic learning questions and direct feedback do not cause these problems, but frequent database or in-memory checks on complex model correlations (beyond “correct” or “incorrect” answers) for individual messages to be sent to all system users can be disadvantageous.
Figure 5: Students’ usefulness assessment on Likert scale
(1 “not useful” to 5 “very useful”) of learning questions,
messages and feedback to the lecturer. (n = 78).
For tutorials, utilisation of in-course questions is ambiguous. On the one hand, they allow continued activation of fast/good students by presenting them additional learning questions as soon as they have finished topical tasks, hence reducing their idle times. On the other hand, this additional demand channel⁴ endangers overall attentiveness as the docent continues asking verbal questions towards the general audience or individuals within the audience. No matter which psychological model one believes in, either multi-tasking or single-tasking humans, the additional demand channel either takes from the shared attention pool and generates additional administrative effort in the multi-tasking model, or it reduces the available attention spread in the single-tasking model. This theoretical objection could be confirmed for in-course questions within a presence unit. However, stretching the definition of in-course, prepared questions for off-campus learning proved very useful. Especially the combination of before and after unit questions allows individualised identification of learning demands. Within our ETTK we presented students with confidence questions before each unit, simply asking them whether they felt confident in successfully finishing each single exercise task. For this, only the headings of the tasks were presented with a 5-step Likert scale (ranging from “very uncertain” (1) to “very confident” (5)).

⁴ In a psychological perception model, asking a question yields an answer demand. In the model, each means of asking questions is a demand channel.
After each tutorial unit the students were then presented the same questionnaire, allowing the ETTK system an automated comparison of before and after unit confidence in a first step, and a derivation of the individual learning demand for each student in a second step. Ultimately, a third step generated recess information on the course group's learning/understanding progress, providing repetition proposals to the docent. An exemplary individualised learning demand appraisal presented to a student is depicted in Figure 6.
Figure 6: Individualised learning demand appraisal pointing
out deficits the student should focus on.
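The three steps can be outlined in a minimal Python sketch (hypothetical names and thresholds; the actual ETTK derivation is more elaborate), assuming per-task confidence on the 1-5 Likert scale:

def learning_demand(before, after, threshold=3):
    # Steps 1 and 2: compare per-task confidence (1-5 Likert) before and after a
    # unit and derive the individual learning demand for one student.
    # 'before' and 'after' map task headings to confidence values.
    demand = {}
    for task, pre in before.items():
        post = after.get(task, pre)
        # Low post-unit confidence or a drop in confidence indicates a deficit.
        if post < threshold or post < pre:
            demand[task] = {"before": pre, "after": post, "gain": post - pre}
    return demand

def repetition_proposals(all_demands, min_share=0.5):
    # Step 3: aggregate individual demands into group-level repetition proposals
    # for the docent (tasks where at least half of the students show a deficit).
    counts = {}
    for demand in all_demands:
        for task in demand:
            counts[task] = counts.get(task, 0) + 1
    n = max(len(all_demands), 1)
    return [task for task, c in counts.items() if c / n >= min_share]

before = {"Task 1: Subnetting": 2, "Task 2: DNS": 4}
after  = {"Task 1: Subnetting": 2, "Task 2: DNS": 5}
print(learning_demand(before, after))   # Task 1 remains a deficit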
3.2 Virtual Whiteboard Feedback System (Whiteboard)
Whiteboards are a means of collaborative develop-
ment and provisioning of sketch areas. In our setup
we utilised a Whiteboard for student hand-ins during
tutorials; utilisation in readings would not be feasible
due to the size of the audience as well as their focus
on passive presentation. The expectable amount of
feedback would be more than challenging for the do-
cent to comprehend. Of course, Whiteboards can be
used as a scratch pad by reading attendees, but this
scenario was not considered in our tests.
For a given task, students would normally prepare hand-ins which in turn would either be put to discussion among the group (selectively or in total), be distributed among learning groups, effectively permuting a course's hand-ins within the course, or be evaluated by the docent outside of course units with the results presented in the next unit. Loosening these constraints, Whiteboards allow a quasi-real time anonymous discussion of hand-ins without handling paper submissions. The docent can either share the evaluation process with the group, which would be equal to the previously mentioned discussion, or they could evaluate submitted hand-ins as they appear in the system and mark “noteworthy” submissions for a later condensed discussion. This helps conserve valuable tutorial time.
Although Whiteboards allowing students to sub-
mit hand-ins are infeasible for readings, it can be ar-
gued that they can be utilised for other forms of feed-
back submissions. While this is true for students tak-
ing down notes and sharing them among each other,
ComparingTool-supportedLectureReadingsandExerciseTutorialsinClassicUniversitySettings
249
they could only be utilised for time-decoupled hand-
ins from the docents’ point of view. However, in our
setting tutorials are the environment for hand-ins. In
addition, having other feedback like questions sub-
mitted via Whiteboard inhibits automated handling in
database archiving, etc., which is possible with ARS
and Q&A Systems. Consequently, Whiteboards in
readings are infeasible and their task can be satisfied
by ARS and Q&A Systems.
With respect to tutorials, Whiteboards can provide additional, valuable feedback possibilities. As discussions on topics are targeted, the ability to provide feedback and/or hand-ins that need not adhere to text-based restrictions (Q&A System) or pre-selection values (ARS) is a valuable amendment to tutorials. Students can swiftly hand in tables, sequence and state diagrams, or other UML and sundry diagrams. As described in Section 2, the hand-ins are designed to be discussed in a timely manner, so the aspect of automated handling, etc., as discussed for readings, is not as important. Generally, Whiteboard submissions would be erased after a tutorial unit and its associated discussions had concluded.
3.3 Question and Answer System (Q&A System) and Discussion System (Panel)
We assume the concepts of Q&A Systems and Panels
are well known and do not require a definition here,
so we can concentrate on the student questions.
Since 2011 we have been utilising an advanced combination of Q&A System and Panel with Auditorium⁵ (Beier et al., 2014), (Beier, 2014), which was originally developed as a student project. A simple Q&A System was tailored to tutorials by including it into an exercise tool kit with ARS as well as Whiteboard. This Q&A System allowed tutorial participants to anonymously ask questions, and to up- or down-vote questions submitted by other students. The docent would – time permitting – answer the highest ranked questions at the end of a tutorial unit. However, actual deployment of the Q&A System led to some modifications to this idea that shall be discussed here (and in Section 4), notably its “misuse” as a Panel.

Based on the design of readings, only a few questions can actually be addressed during a reading unit. Tacitly agreed upon, only imperative questions of utmost importance are asked during the reading, as this interrupts the docent. Students tend to note down questions and approach the lectern after a reading unit has concluded.

⁵ https://auditorium.inf.tu-dresden.de – accessed 24 March 2015
Deriving from the situation so described, Q&A Systems can only help to a certain extent. Having a Q&A System serve the Q as well as the A aspect is surely infeasible for readings, as the A aspect still contributes to interruptions of the reading. However, the Q aspect can serve well, as it allows students to immediately note down questions they might have. These questions can then either be answered by the docent later (qualified answer), or they can be answered during the reading by other students who might have already understood the topic of the question (or think they have) or aspects thereof (solicited answer). Solicited answers can later still be revised or amended by qualified answers. Hence, Q&A Systems can augment readings, but only if the Q and A aspects are loosely coupled in terms of qualified answers in a timely manner. Our Auditorium system addressed these aspects in the classic way a forum would, by also allowing vested discussions on topics.
Combining the aspects of system knowledge on individualised learning, our system was able to join assumed knowledge on individual learning progress with student question demand. AMCS encouraged students to ask relevant questions by sending push notifications to their smartphones. Based on their individual profiles, AMCS sent messages, e.g. “You still had some problems with [topic]. You should ask the docent about [identified deficit].”
On the side of tutorials, similar considerations as for readings can apply. Tutorials are designed for students to actively engage in the topic materials. As
course contents are mandated by the predefined cur-
riculum, time constraints mostly affect tutorials and
limit the presentation/discussion ratio; some coverage
of subject matter is mandatory, as all units are limited
to the time frame of 90 minutes. So the basic prob-
lem is finding a solution to how many questions can
be answered within 90 minutes without endangering
the goal of covering all required topics. In classic tu-
torials students would raise their hands and the do-
cent would either fairly apply FIFO
6
processing, or
unfairly by arbitrarily (i.e. at their discretion) picking
students. This can lead to stress and disappointment
for the students as questions perceived as important
might not be addressed or sufficiently answered.
Having introduced Q&A Systems into the described situations allowed maintaining a comprehensive list of submitted questions which students either do not want to discuss openly because they are ashamed of exposing themselves, or deem important enough to ask, but not important enough to be addressed immediately. Sometimes time constraints can force the docent to only allow a few questions at the end of the tutorial. Whether all questions can be answered is a secondary concern.

⁶ First In First Out – students are processed in the order in which they raise their hands.
Introducing a fair rule for immediate and delayed answering of questions might just do the trick. Allowing all questions – once again anonymously – to be seen and voted on by all students allows swift aggregation of issues important to the majority of students. In the manner of self-regulating communities such as stackoverflow⁷, students are allowed to up- or down-vote questions, thereby deciding themselves which questions are “worthy” (important) and which are “non-sensical” (unimportant). Time permitting at the end of the tutorial unit, the docent can then immediately answer the top X (e.g. 5) questions, while postponing answers to the other questions to an off-unit discussion. Refer back to Figure 3 for an exemplary voting.
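A minimal sketch of this triage rule (hypothetical names; the prototypes' actual ranking may differ), where the net of up- and down-votes decides which questions the docent answers immediately:

from dataclasses import dataclass

@dataclass
class Question:
    text: str
    up: int = 0
    down: int = 0

    @property
    def score(self):
        # Net community rating: up-votes minus down-votes.
        return self.up - self.down

def triage(questions, top_x=5):
    # "Fair rule" sketch: the top X questions by net votes are answered
    # immediately at the end of the unit, the rest are postponed.
    ranked = sorted(questions, key=lambda q: q.score, reverse=True)
    return ranked[:top_x], ranked[top_x:]

submitted = [Question("Why does TCP need a three-way handshake?", up=9, down=1),
             Question("Will this be on the exam?", up=2, down=6),
             Question("How does DNS caching interact with TTLs?", up=7)]
answer_now, postponed = triage(submitted, top_x=2)
print([q.text for q in answer_now])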
However, during our tests we observed that students tended to not actually vote down questions, but to use a comment feature designed for an utterly different purpose: our prototype allowed students to amend their questions with additional clarification of their problem. But, due to an implementation error, all students were able to amend all posted questions, effectively transforming the feature into a commenting system. Making use of this error, students tended to negatively comment on “non-sensical” questions rather than voting them down. This could manifest to an extent close to mobbing the initial question poster into revoking⁸ their question. On a more positive note, the “commenting system” was intensively utilised by idle students to try to explain and present their understanding of solutions and to provide answers. This in turn led to even higher up-vote results, as well-discussed but still unresolved questions would tend to lead the ranks of “worthy” questions, imperatively mandating answers by the docent. For the idle students, who in general are the better ones among the group, this provided a stage to test themselves by attempting to help others without exposing themselves as strivers or nerds. Once again, the aspect of self-regulating communities supported the system, as disturbers were swiftly engaged by the other students.

As seen, our investigations show that it is not advisable to strictly separate Q&A System aspects from Panel aspects.

⁷ http://stackoverflow.com/ – accessed 25 March 2015
⁸ Students were able to revoke their question at any time. Revoked questions were deleted from the system, making them also inaccessible to the docent.
4 RESULTS AND FUTURE WORK
Our comparison of the investigated tools shows only few noteworthy new findings when applied to a course curriculum consisting of readings and tutorials. However, when considering both settings separately with respect to the investigated tools and their combination, several remarkable aspects could be observed. We wish to focus on two results here, namely the “Panelisation” of Q&A Systems in tutorials, and readings and tutorials complementing each other in their utilisation of tools.
The attendees of tutorials acting as a self-regulating (online) community within the tool kit prototypes was unexpected and needs to be investigated
further. This implies that less “community manage-
ment” effort is required from the docent, due to the
students’ situation awareness. Nevertheless, the ex-
tent of attentiveness for such “community tasks” in
parallel to the actual tutorial activities needs to be
fathomed.
Our design decision to utilise separate tools for readings and tutorials was founded in the well-established Auditorium/AMCS for readings on the one hand, and the desire for fast prototyping for the tutorials on the other hand, allowing week-to-week incorporation of subject (student) feedback. This decision proved to be poor, as this approach not only confused students and docents, but also discouraged both from using the tool kits. In the future, we will research the effect of providing readings and tutorials using a single tool kit for both by assimilating the best aspects of our ETTK prototype into AMCS. However, at the same time both application settings must be further investigated, since their different contexts require different tool and/or system aspects. The usefulness of certain features remains dependent on the context of the application setting, so our system will need to adapt adequately. As our current systems can be utilised together with other e-learning tools or in combination with MOOCs, we wish to investigate the impact of our combined single system on those as well, especially as those e-learning tools and MOOCs often incorporate isolated applications specific to readings or other formats.
ACKNOWLEDGEMENTS
The authors wish to thank Hermann Körndle (holder of the Chair of Learning and Instruction, TUD) for his valued contributions that made our research possible, Eric Schoop (holder of the Chair of Wirtschaftsinformatik – Information Management, TUD), as well as Lars Beier, Sebastian Herrlich, Mathias Kaufmann, Tommy Kubica, Martin Weißbach, and Huangzhou Wu (graduate students at TUD) for their programming efforts.
REFERENCES
Beatty, I. D., Gerace, W. J., Leonard, W. J., and Dufresne,
R. J. (2006). Designing effective questions for class-
room response system teaching. American Journal of
Physics, 74(1):31–39.
Beier, L. (2014). Evaluating the Use of Gamification in
Higher Education to Improve Students Engagement.
Diploma thesis, Technische Universität Dresden.
Beier, L., Braun, I., and Hara, T. (2014). auditorium - Frage,
Diskutiere und Teile Dein Wissen! In GeNeMe 2014
- Gemeinschaften in Neuen Medien. GeNeMe.
Brady, M., Seli, H., and Rosenthal, J. (2013). “clickers”
and metacognition: A quasi-experimental compara-
tive study about metacognitive self-regulation and use
of electronic feedback devices. Computers & Educa-
tion, 65:56–63.
Caldwell, J. E. (2007). Clickers in the large classroom:
Current research and best-practice tips. CBE-Life Sci-
ences Education, 6(1):9–20.
Duncan, D. (2006). Clickers: A New Teaching Aid with Ex-
ceptional Promise. The Astronomy Education Review,
5(I):70–88.
Feiten, L., Weber, K., and Becker, B. (2013). Smile: Smartphones in der Lehre – ein Rück- und Überblick. INFORMATIK, P(220):255–269.
Kapp, F., Braun, I., and Körndle, H. (2014a). Aktive Beteiligung Studierender in der Vorlesung durch den Einsatz mobiler Endgeräte mit Hilfe des Auditorium Mobile Classroom Services (AMCS). In Symposium auf dem 49. Kongress der Deutschen Gesellschaft für Psychologie; Verbesserung von Hochschullehre: Beiträge der pädagogisch-psychologischen Forschung. E. Seifried, C. Eckert, B. Spinath & K.-P. Wild (Chairs).
Kapp, F., Braun, I., Körndle, H., and Schill, A. (2014b).
Metacognitive Support in University Lectures Pro-
vided via Mobile Devices. In INSTICC; Proceedings
of CSEDU 2014.
Kapp, F., Damnik, G., Braun, I., and Körndle, H. (2014c).
AMCS: a tool to support SRL in university lectures
based on information from learning tasks. In Sum-
merschool Dresden 2014.
Lantz, M. E. (2010). The use of clickers in the classroom:
Teaching innovation or merely an amusing novelty?
Computers in Human Behavior, 26(4):556–561.
Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bim-
ber, B., Chun, D., Bulger, M., Campbell, J., Knight,
A., and Zhang, H. (2009). Clickers in college class-
rooms: Fostering learning with questioning methods
in large lecture classes. Contemporary Educational
Psychology, 34(1):51–57.
Mazur, E. (1997). Peer Instruction: A User’s Manual. Pren-
tice Hall, Upper Saddle River, NJ, series in educa-
tional innovation edition.
Moss, K. and Crowley, M. (2011). Effective learning in
science: The use of personal response systems with
a wide range of audiences. Computers & Education,
56(1):36–43.
Prather, E. E. and Brissenden, G. (2009). Clickers as data
gathering tools and students' attitudes, motivations,
and beliefs on their use in this application. Astronomy
Education Review, 8(1).
Seel, N. M. (2003). Psychologie des Lernens: Lehrbuch für Pädagogen und Psychologen, volume 8198. UTB, München, 2nd edition.
Weber, K. and Becker, B. (2013). Formative Evaluation
des mobilen Classroom-Response-Systems SMILE.
E-Learning zwischen Vision und Alltag (GMW2013
eLearning).
APPENDIX
All tools and tool kits we utilised were web-based and operated from standard web servers at our alma mater in Dresden, SN, Germany, and in Saint Louis, MO, USA. Access was possible via web browsers as well as via dedicated iPhone and Android apps for AMCS, and took advantage of socket-based bidirectional real time communication.
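For illustration, a minimal sketch of socket-based bidirectional push using Python's standard library (a toy line-based protocol; the deployed systems used web and app sockets on standard web servers, not this code):

import asyncio

CLIENTS = set()

async def handle_client(reader, writer):
    # Each connected browser/app keeps a socket open; updates are pushed to all.
    CLIENTS.add(writer)
    try:
        while True:
            line = await reader.readline()      # e.g. a feedback vote or question
            if not line:
                break
            for client in CLIENTS:
                client.write(line)              # broadcast to docent and students
                await client.drain()
    finally:
        CLIENTS.discard(writer)
        writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8765)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())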
CSEDU2015-7thInternationalConferenceonComputerSupportedEducation
252