Using Non-graded Formative Online Exercises to Increase the
Students’ Motivation and Performance in Classroom
A Longitudinal Study from an Undergraduate Information Systems Program in
Singapore
Ilse Baumgartner
School of Information Systems, Singapore Management University, 80 Stamford Road, Singapore, Singapore
Keywords: Online Assessment Tools, Non-graded Formative Assessments, Increase Students’ Motivation and
Performance.
Abstract: In a seven-term (i.e., three-and-a-half-year) longitudinal study, the author examined how the use
(or non-use) of interactive non-graded formative online exercises affected students'
attention and motivation in the classroom and, consequently, their level of performance in graded course
assessments. This practice paper describes an interactive online system for non-graded assessments which
was conceptualised, designed and implemented at the School of Information Systems, Singapore
Management University, and presents an analysis of student performance data gathered in a
large compulsory senior-level course, focusing particularly on the comparison of the system users'
performance with that of non-users in selected graded assessment components.
1 INTRODUCTION
This paper reports on the results of a three-and-a-half-year
longitudinal study in which the author of the
paper examined how the use (or non-use) of
non-graded formative online assessment
exercises affected students' motivation and
performance level in class.
The paper unfolds in the following manner.
In chapter two of this paper, the author
undertakes a brief review of the role of formative
non-graded assessments in higher education. This
review shows that – contrary to graded assessments
– the question of how non-graded assessments can
be used to enhance students' performance in class
has rarely been discussed in the higher education
literature.
Chapter three discusses the use of online
assessment tools, systems and applications in higher
education and examines the role of information
technology in the space of higher education
assessments.
In chapter four, the web-based system for non-
graded formative classroom exercises (FACE)
designed and implemented at the School of
Information Systems, Singapore Management
University, is introduced. The chapter outlines
the most important features of FACE and
describes a specific application example
of the FACE system.
Chapter five presents a brief analysis of the data
collected over a period of three and a
half years. This chapter not only examines the
performance data in the FACE exercises, but also
analyses the correlations between the
use of the FACE application and the students'
performance in selected graded assessment
components of the course.
Chapter six concludes with a brief reflection on
the usefulness and effectiveness of formative non-
graded online exercises in undergraduate programs
and makes suggestions on how such exercises could
be used to enhance students' motivation, capture
students' attention and, consequently, raise
students' performance level in class.
2 THE ROLE OF FORMATIVE
NON-GRADED ASSESSMENTS
IN HIGHER EDUCATION
While there is an abundance of research concerned
with the role of graded assessments and exercises in
higher education (Black and Wiliam, 1998; Cowan,
2002; Nicol et al., 2005; Rust et al., 2003; Taras,
2002; Zvacek, 1999), little attention has been
devoted to the role of non-graded assessments
(Anthony and Raymond, 2004).
Moreover, there seems to be no clear
understanding of how summative, formative and
self-assessments relate to each other (Taras, 2008b).
Some of the research investigating the
effectiveness of non-graded assessments claims that
there is a lack of logic in the argument that working
in a non-graded context is the best way for students
to build up knowledge for formal, graded assessment
(Duvall, 1994; Taras, 2005; Warren, 1998).
Consequently, non-graded exercises are unlikely
to receive the same attention from students as
graded assessments (Taras, 2008a). Thus, some
research argues that – if understanding the graded
assessment component is the ultimate goal – it
would be more effective to make those assessments,
and not the non-graded exercises, the point of
focus (Anthony and Raymond, 2004).
Selected research, however, shows that overly
frequent graded assessments have a “negative
impact on motivation for learning that militates
against preparation for lifelong learning” (Harlen
and Crick, 2003), thereby suggesting that non-graded
assessment exercises could actually help to achieve
the opposite effect.
Some research also seems to suggest that
graded assessments force students to focus on
performance rather than learning (Grant and Dweck,
2003). This would, in turn, imply that non-graded
assessments support students in focusing on
subject mastery and on the learning process instead
of merely thinking about “passing the course”.
3 ONLINE ASSESSMENT TOOLS
IN HIGHER EDUCATION
The use of web-based interactive learning and
teaching tools has increased tremendously over the
past decade. There are numerous commercial and
open-source learning and teaching management
applications currently available on the market
(Blackboard, Desire2Learn, WebCT, Moodle and
others).
Recent research has shown that online-based
teaching and learning tools may improve students’
learning and performance in higher education
courses. Selected research has examined the use of
interactive, computer-based assessment tools for the
purposes of student practice and feedback and found
a significant performance difference between
students using the computerised practice tests and
those who did not (Gretes and Green, 2000). The use
of computer-based assessment and practice tools
also seems to have a positive impact on students’
motivation (Thelwall, 2000).
In addition, research has found that online-based
or computer-based teaching and assessment tools are
also attractive to the teaching personnel, as they
increase interactivity in the classroom and make
assessments and activities more engaging for students
(Wolsey, 2008).
Most recently, the term eAssessment has been
adopted in higher education research. Pachler
et al. (2010) employ the term formative e-
assessment, which they define as “the use of ICT
to support the iterative process of gathering and
analyzing information about student learning by
teachers as well as learners and of evaluating it in
relation to prior achievement and attainment of
intended, as well as unintended learning outcomes”
(p. 716). This definition stresses the important role
of information technology in the space of higher
education assessments and can be applied to graded
and non-graded assessments equally.
4 INTRODUCTION TO “FACE”
4.1 The Origins of FACE
The web-based system for non-graded formative
classroom exercises (FACE) was fully designed and
implemented at the School of Information Systems,
Singapore Management University.
When evaluating open-source as well as licensed
products currently available on the market, the
author of the paper specifically focused on the
following requirements:
1) Ability to integrate with university-internal
course, curriculum and personnel management
systems
2) Ability to facilitate individual use of the system
across different groups (i.e., sections) of
the same course
CSEDU2013-5thInternationalConferenceonComputerSupportedEducation
498
3) Ability to work with flexible and configurable
exercise templates
4) Ability to collect and use student performance
data and to monitor students’ performance
during the course
5) Ability to publish and un-publish exercises for
particular weeks
As none of the evaluated systems fully satisfied these
requirements, the author decided on an in-house
implementation of the system.
4.2 Functionalities of FACE
The FACE application is a web-based system which
has been designed using Microsoft ASP.NET and
Microsoft SQL Server technologies. The FACE
application uses Windows credentials and Enterprise
Single-Sign-On to authenticate the users. Moreover,
the FACE application is tightly integrated with a
university-internal course management system.
The FACE application has two main interfaces:
one interface is exposed to students (i.e., the
practicing interface) and one interface is exposed to
the teaching personnel (i.e., the administrative
interface).
The administrative interface provides the
teaching personnel with the following major
functionalities:
1) Setting up formative non-graded exercises for a
given session based on a range of pre-defined
exercise templates
2) Determining the desired number of exercises per
session
3) Manually opening and closing the exercises or
setting a specific date and time range when the
exercise can be accessed
4) Configuring the correct solutions for a given
exercise
5) Determining the number of attempts after which
the students will be able to access the correct
solutions for the given exercise
6) Examining the results of the exercises using
graphical data analysis tools
The practicing interface exposes the following
major features to the students:
1) Accessing the formative non-graded exercises
opened for a given session and executing those
exercises
2) Accessing the correct solution for a given
exercise after the pre-set number of attempts
3) Monitoring one's own performance across several
sessions
The exercises in FACE are non-graded and
formative in nature, and no specific time
limit is enforced for completing an exercise. Upon
saving the selected set of responses, the system
notifies the student whether the exercise has been solved
correctly. Once a certain number of attempts for a
given exercise has been reached, the students can (if
they choose to do so) access the correct solution for
that exercise. Otherwise, the students may re-attempt
the exercise until they arrive at the correct solution
themselves, without consulting the solution
repository.
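This attempt-and-feedback cycle can be illustrated with a short code sketch. The following C# fragment is a minimal illustration of the behaviour described above, not the actual FACE implementation; all type and member names (Exercise, SubmitAttempt, maxAttemptsBeforeSolution, and so on) are hypothetical.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the FACE attempt cycle (illustrative only).
public class Exercise
{
    private readonly IReadOnlyList<string> correctResponses;
    private readonly int maxAttemptsBeforeSolution; // configured per exercise by the instructor
    private int attemptsUsed;

    public Exercise(IReadOnlyList<string> correctResponses, int maxAttemptsBeforeSolution)
    {
        this.correctResponses = correctResponses;
        this.maxAttemptsBeforeSolution = maxAttemptsBeforeSolution;
    }

    // The correct solution becomes accessible only after the pre-set number of attempts.
    public bool SolutionAccessible => attemptsUsed >= maxAttemptsBeforeSolution;

    // Upon saving a set of responses, the student is immediately told which
    // individual responses are correct and whether the exercise as a whole is solved.
    public (bool[] PerResponse, bool Solved) SubmitAttempt(IReadOnlyList<string> responses)
    {
        attemptsUsed++;
        bool[] perResponse = responses
            .Select((r, i) => string.Equals(r, correctResponses[i], StringComparison.OrdinalIgnoreCase))
            .ToArray();
        return (perResponse, perResponse.All(ok => ok));
    }
}

As the sketch shows, the solution repository stays locked until the configured number of attempts has been used, while every saved attempt immediately returns per-response feedback.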
4.3 Use of FACE in the Enterprise Web
Solutions Course
4.3.1 The Enterprise Web Solutions Course
The Enterprise Web Solutions course is a large
compulsory third year course at the School of
Information Systems, Singapore Management
University. This course focuses on enterprise
portal technologies and exposes the students to the
complete life cycle of an enterprise portal in an
organisation.
The course is run in both academic terms – in term
1 (August to December) and in term 2 (January to
May). In term 1, a total of 160 students take this
compulsory course; in term 2, 80 students take it.
Of the 160 students in the first term,
80 consistently used the FACE
application; of the 80 students in the second term, all
80 used the system.
4.3.2 Deployment and Productive Use of the
System
The initial deployment of FACE took place in
January 2010.
To enable long-term comparison, the exercises
set up for the Enterprise Web Solutions course were
similar across all sections of a particular term, and
the exercises were also similar across different terms.
To achieve this similarity, the same
exercise templates were consistently employed, and
similar and comparable content was used to “feed”
the exercise templates.
All formative assessment exercises set up in the
FACE application were targeted at students’ self-
reflection and self-testing. Most importantly, the
exercises did not require the students to memorise
facts, to copy answers from the given lecture
material, or to guess the correct answers.
Instead, the exercises encouraged the
students to seek the underlying meanings, to
explore relationships between different concepts, or
to compare the advantages and disadvantages of specific
approaches. In addition, one of the most
essential intrinsic values of those exercises was their
non-graded nature.
This means that the students were able to
complete those exercises without any fear of
earning a bad mark or of negatively impacting their
score for the course.
5 DATA ANALYSIS
5.1 Data Collection
The FACE system collects diverse quantitative data
on students’ performance in the exercises.
Firstly, the system captures the number of
attempts which a particular student needs to
complete a given exercise. Secondly, for a given
attempt of a particular exercise, the system captures
the “correctness” or “incorrectness” not only of the
exercise as a whole but also of every individual
response within the exercise. Thirdly, the system
captures the time elapsed between a particular
student's attempts at the same exercise.
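A minimal data model for these three kinds of records could look as follows. This C# sketch is purely illustrative and does not reproduce the actual FACE database schema; the names AttemptRecord and ResponseRecord are invented for this example.

using System;
using System.Collections.Generic;

// Hypothetical records mirroring the three kinds of quantitative data
// described above (illustrative only, not the actual FACE schema).
public record ResponseRecord(
    int QuestionIndex,
    string GivenAnswer,
    bool IsCorrect);                 // correctness of every individual response

public record AttemptRecord(
    string StudentId,
    int ExerciseId,
    int AttemptNumber,               // which attempt at the exercise this is
    DateTime SubmittedAt,            // allows computing the time elapsed between attempts
    bool ExerciseSolved,             // correctness of the exercise as a whole
    IReadOnlyList<ResponseRecord> Responses);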
The quantitative data captured within the system
is used in several ways.
During the course, the teaching personnel can
use charts and other visualisation means produced
by the application to show students the overall
class performance in terms of the attempts needed to
complete an exercise. The system also uses this data
to provide the students with immediate feedback
concerning the correctness of their individual
solutions. In addition, the system allows the
students to monitor their own performance across
several sessions of the course – e.g., to monitor how
many attempts they need to solve a given exercise
and how this number has changed over time. Moreover,
the data is used by the teaching personnel to identify
the most frequently made mistakes in a given
exercise – and those problematic cases are usually
selected as topics for the FACE
exercise review carried out at the beginning of each subsequent
class.
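Identifying the most frequently made mistakes from such attempt data amounts to a simple aggregation. The following LINQ sketch assumes the hypothetical AttemptRecord and ResponseRecord types introduced above and is not code from the actual system.

using System.Collections.Generic;
using System.Linq;

public static class MistakeReport
{
    // For one exercise, counts how often each question was answered
    // incorrectly across all recorded attempts, most frequent first.
    public static IEnumerable<(int QuestionIndex, int ErrorCount)> MostFrequentMistakes(
        IEnumerable<AttemptRecord> attempts, int exerciseId)
    {
        return attempts
            .Where(a => a.ExerciseId == exerciseId)
            .SelectMany(a => a.Responses)
            .Where(r => !r.IsCorrect)
            .GroupBy(r => r.QuestionIndex)
            .Select(g => (QuestionIndex: g.Key, ErrorCount: g.Count()))
            .OrderByDescending(t => t.ErrorCount);
    }
}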
In order to enrich the insights delivered by
the quantitative data, the author of the paper also
conducted three informal student focus groups on
the use of the FACE application. The first focus
group was conducted one month after the
introduction of the tool in the course. This focus
group was primarily concerned with discussing with
students their perceptions of the tool
itself – the accessibility of the system, its ease of
use, any desired new features, etc. The second focus
group was conducted at the end of the first year of
the system’s use. This discussion particularly
focused on the students’ perception of the usefulness
of this tool for their understanding of the course
concepts. The third focus group was conducted at
the end of the second year of the system’s use and it
was primarily concerned with understanding how
the use of the FACE system is correlated with the
students’ performance in other course assessments.
5.2 Data Analysis
Although the data collected by the FACE application
was used for different purposes, the most interesting
findings were generated when analysing the
correlation between the use (respectively non-use) of
the FACE application and the students’ performance
in other assessment components of the course (the
paper uses the data of one sample graded assessment
of the course, namely, quizzes, and examines how
the use (respectively non-use) of the FACE
applications impacted students’ performance in this
particular assessment).
The performance data was analysed along three
different dimensions: the average time needed to
complete the exercises, the average number of attempts
needed to complete the exercises, and the average
number of errors per attempt. Interestingly, while
there is a clear positive change in students'
performance within a particular term (e.g., the
students need far more time to complete an exercise
at the beginning of the term and considerably less at
the end of the term), there are no changes in
performance across different terms – the pattern
stays the same across all the examined terms.
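The three dimensions can be derived from the raw attempt data with a straightforward aggregation. The fragment below is an illustrative sketch based on the hypothetical AttemptRecord model used in the earlier examples, not the analysis code actually used in the study; the completion time is approximated here as the span from a student's first to last attempt.

using System;
using System.Collections.Generic;
using System.Linq;

public static class PerformanceMetrics
{
    // Computes the three analysis dimensions for one exercise from the
    // hypothetical AttemptRecord data sketched earlier.
    public static (double AvgMinutes, double AvgAttempts, double AvgErrorsPerAttempt)
        Summarise(IEnumerable<AttemptRecord> attempts, int exerciseId)
    {
        var perStudent = attempts
            .Where(a => a.ExerciseId == exerciseId)
            .GroupBy(a => a.StudentId)
            .ToList();

        // Average time from a student's first attempt to his or her last one.
        double avgMinutes = perStudent.Average(g =>
            (g.Max(a => a.SubmittedAt) - g.Min(a => a.SubmittedAt)).TotalMinutes);

        // Average number of attempts a student needed for this exercise.
        double avgAttempts = perStudent.Average(g => g.Count());

        // Average number of incorrect individual responses per attempt.
        double avgErrorsPerAttempt = perStudent
            .SelectMany(g => g)
            .Average(a => (double)a.Responses.Count(r => !r.IsCorrect));

        return (avgMinutes, avgAttempts, avgErrorsPerAttempt);
    }
}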
The similar pattern across all examined academic
terms is, however, easily explainable.
Due to students' almost non-existent exposure
to similar formative non-graded exercises in
other courses, the students seemed to have
considerable difficulties in developing the
appropriate attitude towards such exercises. Moreover, the
students doubtlessly needed some time to develop
adequate reasoning and evaluation skills – as the
exercises primarily required the students to “look
behind the scenes” instead of memorising and
reproducing given facts.
The consistent pattern across all four terms under
examination clearly suggests that non-graded
formative exercises can considerably contribute to
the development of such skills. With a comparatively
small time investment during class (the
Enterprise Web Solutions course devoted
approximately 15 minutes to the completion of the
exercises and an additional 10 minutes to the discussion
of the previous week's exercises), the students
appeared to considerably improve their abilities to
quickly evaluate given concepts, to assess the
relationships among those concepts, to reflect on
different aspects of those concepts (disadvantages
vs. advantages, pros vs. cons), and to establish logical
combinations of those concepts.
The further development of these abilities, in turn,
seemed to have a positive influence on the students'
performance in other course assessments.
A comparison of the performance levels in the
quiz assessment of the students who used the FACE
application versus the students who did not use the
FACE application (considering the time period of
the entire study reported in this paper – from term 1,
2009 to term 1, 2012) supports this assumption.
The quiz assessment is conducted three times per
term. The first quiz is conducted in week 4, the
second quiz in week 9, and the third quiz in week
12. The complexity of the quizzes increases from
assessment to assessment: while the first quiz has 5
questions and 5 minutes allocated to it, the next quiz
has 6 questions (to be completed in 6 minutes), and
the last quiz has 7 questions which need to be
finished within 7 minutes. The quizzes conducted in
the Enterprise Web Solutions course are a
combination of two types of questions – multi-select
questions and short-answer questions. Contrary to
typical single-choice or multiple-choice quizzes, the
quizzes conducted in this particular course are not
targeted at memorisation and reproduction of facts
or data. Instead, the students need to
evaluate the plausibility of given statements, assess
the possibility of combining different options, and
exclude or include various alternatives.
While the difference in the maximum marks
achieved in this particular assessment is small
(FACE users vs. FACE non-users), the pattern
concerning the minimum mark achieved in this
assessment is very interesting.
As indicated above, the first quiz is conducted in
week 4 (the FACE exercises start in week 2).
Consequently, there is not much opportunity for
students to practice on the non-graded exercises
before the first quiz. Thus, the impact of those exercises on
this assessment seems to be low. For the quizzes
conducted in weeks 9 and 12, however, the impact
seems to increase steadily, as the minimum mark
(and, consequently, the average mark) for those
assessments is consistently higher for students using
the FACE application than for those who do
not use FACE (e.g., considering all terms under
examination, for quiz 2 the non-users of the FACE
system achieved a minimum mark of 2.0 while the
FACE users achieved a minimum mark of 3.8; for quiz 3,
the non-users had a minimum mark of 2.0 and the
users a minimum mark of 4.1, out of 10).
To obtain qualitative data supporting the
results of the quantitative data analysis, three
informal focus groups were conducted with
students of the course.
While the first focus group primarily
focused on the design aspects of the tool (and led to
several changes in the layout of the practicing,
i.e., student-facing, interface), the most important insights
concerning the perceived usefulness of the tool from
the students' perspective were gained through the
second and third focus groups.
One of the most frequently discussed aspects was
the students' difficulty in getting “used” to the
nature of the tool.
One of the participants of focus group 2
formulated this aspect in the following way:
The most difficult thing here is to get used to
the fact that the exercises are non-graded.
Personally, I did not take them seriously at
the beginning. I thought: I do not get a mark
for this thing, so why should I be doing it?
Another student of the same focus group added:
For me, it was not taking it seriously or not. I
was actually afraid of doing them. I was
afraid of making mistakes. I thought that
somehow it will impact my grade for the
course. So it took for me a long time to see
that nothing bad happens if I make a mistake.
That I can start over again and try to fix it.
Most students clearly confirmed that
considerable effort was involved in getting
used to the nature of the exercises and in accepting the
fact that those exercises are neither graded nor taken
into account when assigning the final mark for the
course.
However, it seems that most students –
particularly later in the course – started to appreciate
the non-graded and formative character of the
exercises and of the feedback given on those exercises. As
one of the participants of the third focus group
noted:
Towards the end of the course, I started to
feel so good about those exercises. The main
thing was that the pressure was gone. I knew
that I have the freedom in doing them the way
I like. Making mistakes … trying out …
looking for the right answers … yes, thinking
about them.
UsingNon-gradedFormativeOnlineExercisestoIncreasetheStudents'MotivationandPerformanceinClassroom-A
LongitudinalStudyfromanUndergraduateInformationSystemsPrograminSingapore
501
Moreover, the students pointed out
that those exercises were particularly useful because
of their online availability. In fact, most of the
students stated that they also continued to look at those
exercises outside the actual class,
and that they used those exercises to prepare for the
final exam of the course.
Additional features appreciated by the students were
the immediate feedback which the system returned
upon each attempt, the possibility of
monitoring their own performance across several sessions
of the course, and the accessibility of the correct
answers after a specific number of attempts
(although most students indicated during the focus
groups that the temptation to access the correct
solutions as soon as they became available decreased
considerably over time – instead, most students
attempted to finish the exercises without consulting
the solution repository). Moreover, most students
also considered the “post-exercise” review done
during the subsequent class very useful for their
understanding of the concepts and topics covered in
class.
6 CONCLUSIONS
The current study has shown that non-graded
formative online exercises have the potential to
considerably improve students’ understanding of
complex concepts and their underlying meanings.
The study has also demonstrated that such exercises
are helpful in emphasizing learning and reinforcing
important concepts covered in a course, and they
also may be instrumental in increasing students’
motivation and engagement in class.
Although the current study has been carried out
in the context of Information Systems education, the
insights gained through this study appear to be
applicable to any higher education program which is
motivated to provide students with a greater choice
and ownership in their learning and which is
determined to emphasize student-centered and
student-focused teaching and learning.
REFERENCES
Anthony, R. N. & Raymond, L. A. 2004. How Reliable
Are Our Assessment Data?: A Comparison of the
Reliability of Data Produced in Graded and Un-
Graded Conditions. Research in Higher Education, 45,
921-929.
Black, P. & Wiliam, D. 1998. Assessment and classroom
learning. Assessment in Education: Principles, Policy
and Practice, 5, 7-74.
Cowan, J. 2002. Plus/minus marking: A method of
assessment worth considering?, York, UK, The
Higher Education Academy.
Duvall, B. 1994. Obtaining student cooperation for
assessment. New Directions for Community Colleges,
88, 47–52.
Grant, H. & Dweck, C. S. 2003. Clarifying achievement
goals and their impact. Journal of Personality and
Social Psychology, 85.
Gretes, J. A. & Green, M. 2000. Improving undergraduate
learning with computer-assisted assessment. Journal
of Research on Computing in Education, 33, 46-54.
Harlen, W. & Crick, R. D. 2003. Testing and motivation
for learning. Assessment in Education, 10, 169-207.
Nicol, D. J. & Macfarlane-Dick, D. 2005. Formative
assessment and self-regulated learning: A model and
seven principles of good feedback practice. Studies in
Higher Education, 31, 199-218.
Pachler, N., Daly, C., Mor, Y. & Mellar, H. 2010.
Formative e-assessment: Practitioner cases. Computers
& Education, 54, 715-721.
Rust, C., Price, M. & O’Donovan, B. 2003. Improving
students’ learning by developing their understanding
of assessment criteria and processes. Assessment and
Evaluation in Higher Education, 28, 147-164.
Taras, M. 2002. Using assessment for learning and
learning from assessment. Assessment and Evaluation
in Higher Education, 27, 501-510.
Taras, M. 2005. Assessment - summative and formative -
some theoretical reflections. British Journal of
Educational Studies, 53, 466-478.
Taras, M. 2008a. Issues of power and equity in two
models of self assessment. Teaching in Higher
Education, 13, 81-92.
Taras, M. 2008b. Summative and formative assessment:
Perceptions and realities. Active Learning in Higher
Education, 9, 172-192.
Thelwall, M. 2000. Computer-based assessment: a
versatile educational tool. Computers and Education,
34, 37-49.
Warren, J. 1998. Cognitive measures in assessing learning.
New Directions for Institutional Research in Higher
Education, No. 59 (Implementing Outcomes
Assessment: Promise and Perils), 29–39.
Wolsey, T. 2008. Efficacy of instructor feedback on
written work in an online program. International
Journal on E-Learning, 7, 311–329.
Zvacek, S. M. 1999. What’s my grade? Assessing learner
progress. TechTrends, 43, 39-43.
CSEDU2013-5thInternationalConferenceonComputerSupportedEducation
502