A Technology Enhanced Assessment System for Skill and Knowledge Learning
Enosha Hettiarachchi¹, Enric Mor², Maria Antonia Huertas² and M. Elena Rodriguez²
¹Internet Interdisciplinary Institute (IN3), Universitat Oberta de Catalunya, Barcelona, Spain
²Computer Science, Multimedia and Telecommunication Studies, Universitat Oberta de Catalunya, Barcelona, Spain
Keywords: e-Assessment, Higher-order Skills Acquisition, Knowledge Acquisition, Interactivity, Online Higher
Education, Technology-enhanced Assessment.
Abstract: This paper presents a technology-enhanced assessment system that can be used for both skill and knowledge learning. For this purpose, a general technology-enhanced assessment system was designed and developed for an online logic course at a fully online university, taking into account e-learning standards and specifications, so that it can be easily adapted to any institution and subject requiring a high level of skill learning. Through this system, both learning and formative assessment facilities are provided to students. To evaluate its effects on the student learning process, the system was applied in a real logic course at the Universitat Oberta de Catalunya (UOC). The evaluation shows that students were more engaged with the system and, as a result, their performance in the subject improved. Feedback obtained through the evaluation also shows that students were satisfied with the facilities and assessments provided by the system. Overall, the introduction of the technology-enhanced assessment system for skill and knowledge learning has yielded some interesting results.
1 INTRODUCTION
Nowadays, e-learning systems have become a common medium for providing education transparency in society (Krishnamurthy & O'Connor, 2013). E-learning systems that provide high interactivity and feedback motivate and engage students to learn more efficiently (Bull & Mckenna, 2004; Sadler, 2013). As a result, new challenges have arisen for educators and technologists; among them, fostering student engagement and providing assessment can be noted.
Technology-enhanced assessment, also known as
e-assessment, deals with methodologies, tools and
processes where information and communication
technologies are used for the delivery of assessment
activities, the recording of responses and the
provision of feedback (Cook & Jenkins, 2010; Daly
et al., 2010; JISC, 2007). Traditionally, e-assessment has been used for testing the acquisition of declarative knowledge, where students are required to select a predetermined response based on factual recall, as in simple multiple-choice question types (Bull & Mckenna, 2004; de Bruyn et al., 2011). However, cognitive skills, where students have to apply their analytic, creative and constructive abilities, cannot be assessed via multiple-choice tests and equivalent forms (Gruttmann et al., 2008; Majchrzak & Usener, 2011). As Crisp (2009, 2010) stated, in order to test higher-order capabilities, sophisticated assessment tasks need to be designed, but the difficulty and workload involved in designing such tasks are considerable. Skill e-assessment, when offered, is usually subject dependent and technologically complex because of the computational difficulties of representing and simulating higher-order cognitive questions and marking them automatically. Hence, most of the technology-enhanced assessment tools currently available are either developed specifically for particular subject content or only offer simple question types that can be used solely for the assessment of knowledge acquisition. Therefore, one of the main disadvantages is that there is no general tool that can be used for the assessment of both skill and knowledge activities (Hettiarachchi et al., 2013).
Another challenge is associated with the software used for e-assessment systems. According to Bull & Mckenna (2004), several issues have been identified as critical in the decision-making process. These include interoperability,
integration with existing systems, scalability, performance, limitations associated with upgrading procedures, support and maintenance, and security and accessibility.
Considering the above, this paper introduces a technology-enhanced assessment system, which goes beyond a mere combination of existing assessment systems, for both skill and knowledge learning and assessment in an online educational environment. In addition, the system is designed and developed as a standardized, general system that can be easily adapted to any domain or subject. The goal of this paper is to evaluate whether the developed e-assessment system works correctly; this was done through a real online Logic course at a fully online university.
The rest of the paper is organized as follows.
Section 2 presents a general introduction to skill and knowledge assessment, together with some common e-assessment tools and systems. Section 3
explains the proposed technology-enhanced
assessment system. Section 4 presents the data
analysis and results based on the evaluation of the
system. Finally, the paper ends with the conclusions.
2 SKILL AND KNOWLEDGE
E-ASSESSMENT
Assessment activities can be broadly divided into two types: skill and knowledge assessment (Crisp, 2009). Knowledge can be specified as the recall or recognition of specific items. More elaborately, it can be described as the remembering of previously learned materials and contents (Bull & Mckenna, 2004; de Bruyn et al., 2011). This may involve the recall of a wide range of content, from specific facts to complete theories, but all that is required is bringing to mind the appropriate information. Knowledge e-assessment mostly uses simple forms of questions such as Multiple Choice Questions (MCQ), multiple responses, short answers and fill-in-the-blanks. They are generally easier to mark, by both automatic and human means. This type of
assessment is quicker in delivery, gives more
specific and directed feedback to individuals and can
also provide greater curricular coverage (McAlpine,
2002). At the same time, they can be limited in
scope and can occasionally degenerate into a ‘quiz’
of facts about the area of study.
Skill can be defined literally as a practiced ability, expertness, technique, craft or art. Higher-order cognitive skills are typically required for solving exercises encountered in the natural sciences, including computer science and mathematics (Gibbs & Simpson, 2004). Skill e-assessment activities are often associated with a constructivist view of learning and are best suited to situations where there may be a difference of opinion based on interpretation (Crisp, 2007), or to assessing higher cognitive skills. However, they can be time
consuming to set and mark. They also require
greater marking proficiency than knowledge
assessment activities, involving training markers or
detailing criteria (McAlpine, 2002).
2.1 e-Assessment Tools and Systems
The main characteristics of an e-assessment system
have been widely studied. The most important are
(Bull & Mckenna, 2004; Sitthiworachart et al., 2008;
Tselonis & Sargeant, 2007):
- monitoring student progress through frequent assessments,
- applying a variety of interactive question types and promoting student engagement,
- automatic marking, weighted-average grade calculation and immediate feedback (illustrated in the sketch after this list),
- supporting flexible and adaptive learning, and personalization of assessment activities,
- monitoring question quality using statistical analysis, and
- reducing the potential for cheating by randomizing questions along with timers, and sharing questions via question banks.
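To make two of these features concrete, the following is a minimal Python sketch of automatic marking for a simple question type and weighted-average grade calculation. The activity names, marking scale and weights are illustrative assumptions, not taken from any particular system.

```python
def mark_mcq(selected, correct):
    """Automatic marking of a multiple-choice answer: full credit or none."""
    return 1.0 if selected == correct else 0.0

def weighted_average(grades, weights):
    """Weighted-average grade across activities; weights assumed to sum to 1."""
    return sum(grades[a] * weights[a] for a in grades)

# Illustrative activities on an assumed 0-10 scale
grades = {"practice_test_1": mark_mcq("b", "b") * 10, "assessment_test_1": 6.5}
weights = {"practice_test_1": 0.4, "assessment_test_1": 0.6}
print(weighted_average(grades, weights))  # 10 * 0.4 + 6.5 * 0.6 = 7.9
```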
Some universities and educational institutions offer e-assessment, but it is mostly based on knowledge assessment rather than skill assessment activities (Marriott, 2009; Pachler et al., 2010). One of the reasons is that most tools support only simple question types such as MCQs. At the same time, tools that are based on skill assessment activities usually depend on a specific subject context because of their complex semantics.
Since this research is focused on a logic course,
the e-learning and e-assessment tools used for logic
were analysed. Currently, there is a fairly large set of tools used for learning in logic courses, many of which can be categorized as Intelligent Tutoring Systems (ITS), but far fewer exist for e-assessment. The main characteristic of an ITS for learning is providing customized assistance and feedback to students while simulating the presence of an e-tutor; however, ITS lack many of the characteristics of an e-assessment system. There is
an extensive discussion on e-assessment tools in
Crisp (2007) and on ITS for teaching logic in
ATechnologyEnhancedAssessmentSystemforSkillandKnowledgeLearning
185
(Huertas, 2011). In the case of logic, many tools fall
into the category of ITS, for example: Pandora
(Imperial College London, 2013), Organon
(Dostalova & Lang, 2007), and AELL (Huertas,
2011), but none fulfilled all the general features of an e-assessment system noted above.
3 TECHNOLOGY-ENHANCED
ASSESSMENT (TEA) SYSTEM
The context of this research is a first-year Logic course of a Computer Science degree at the Universitat Oberta de Catalunya [www.uoc.edu].
Logic is a good case study for this research because
it requires a high level of skill and knowledge
learning. In this course, an ITS for learning logic, AELL, had been developed and used previously, with the aim of providing a tool to facilitate the learning process (Huertas et al., 2011). The tool assisted students in different kinds of activities by guiding them and informing them of the correctness of their solutions. The main aim of the ITS tool was to provide learners with more practice, through automatically graded exercises, for learning purposes. For assessment, students had, as is traditional, a set of assessment activities, provided through the ITS, that were the same for all students. Therefore, they had the possibility of copying answers from each other.
In order to provide a fully formative e-assessment experience, as mentioned before, both practice and assessment of skill and knowledge acquisition needed to be introduced, to motivate students and provide a rich e-assessment experience while minimizing copying and cheating. In particular, the main characteristics of e-assessment that were not present in the ITS tool used for practice had to be introduced. Therefore, we decided to go a step beyond the use of the ITS and design a new system to provide e-assessment and feedback in an interactive way.
The proposed Technology-Enhanced Assessment
(TEA) system was designed and developed
iteratively following a user centered design approach
(Bevan, 2003). To identify the features and
functionalities of the system, data were collected in
the form of surveys, focus groups and interviews.
Also, interfaces were tested and prototypes were built in order to evaluate the system and make it match teachers' goals and students' learning and assessment needs.
The TEA system was designed to provide both
practice and assessment in both skill and knowledge
acquisition. For practice, students were provided
with facilities such as learning materials and practice
tests. For assessment, the system provided
assessment tests. Both practice and assessment tests
included interactive feedback and based on the
feedback students were able to attempt the tests till
they obtain the required marks needed to master the
knowledge and skills required. For assessment to be
effective, feedback must not only be provided, but
also understood by students and acted on in a timely
fashion (Jordan, 2009). Therefore, the feedback
provided through the system was immediate and
detailed with guidance. Based on that, students
should be able to interactively learn their errors or
mistakes and obtain a higher mark in the subsequent
attempt. For assessment test, some restrictions were
imposed with time and attempts to motivate students
as well as to offer individual questions and the
assessment atmosphere.
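The following is a hedged Python sketch of this attempt policy: feedback-guided reattempts until a required mastery mark is reached, with an optional cap on attempts for assessment tests. The threshold, cap and names are illustrative; the paper does not specify the actual values used.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttemptPolicy:
    mastery_mark: float          # minimum mark required to master the test
    max_attempts: Optional[int]  # None means unlimited attempts (practice tests)

def run_test(policy, attempt_marks):
    """Replay a sequence of attempt marks; return (mastered, attempts_used)."""
    for i, mark in enumerate(attempt_marks, start=1):
        if policy.max_attempts is not None and i > policy.max_attempts:
            return False, i - 1                 # attempts exhausted
        if mark >= policy.mastery_mark:
            return True, i                      # feedback guided each retry
    return False, len(attempt_marks)

practice = AttemptPolicy(mastery_mark=5.0, max_attempts=None)
print(run_test(practice, [3.5, 4.8, 6.0]))      # -> (True, 3)
```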
3.1 Architecture of the TEA System
This section presents the architecture of a general TEA system, from which the TEA system for the Logic course was developed. It
consists of five modules: skill assessment module,
knowledge assessment module, progress bar,
competencies and gradebook.
The skill assessment module provides dynamic and interactive questions for both practice and assessment tests, where students have to construct the answers with the guidance of feedback, errors and hints. In the case of the TEA for the Logic course, this module uses an ITS for Logic practice; for a different subject, an ITS-like tool can be used. The knowledge assessment module also provides both practice and assessment tests, with simple knowledge question types such as MCQ. Also, for these questions, feedback is provided for each step performed by the student. The progress bar is a module that provides visual guidance to help students understand their progress in the course. It shows the total progress obtained by each student, along with a graphical presentation of activities completed, to be completed and not completed. The competencies module allows teachers to understand the competencies achieved by students in a particular course. These competencies are selected based on the marks obtained by students for a particular activity or test. Students can view the competencies they have achieved as a progress bar and a list of tables. The gradebook module is used to display grades and outcomes obtained by students for each activity or test. These components help
CSEDU2014-6thInternationalConferenceonComputerSupportedEducation
186
teachers to track students' learning progress
throughout the whole course period.
Out of the five modules mentioned, the progress bar, competencies and gradebook are taken as the Basic TEA system, as these are the basic functionalities of the general TEA system and are not subject dependent. In addition, the Basic TEA system is capable of storing log data related to students' participation in the activities, together with statistics. Both the knowledge assessment and skill assessment modules are independent modules, where the skill assessment module is usually subject dependent. They are connected with the Basic TEA system using a plug-in developed for the purposes of this research.
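To illustrate this decoupling, the following is a speculative Python sketch of the plug-in boundary between the Basic TEA system and an external assessment module. The interface and method names are our assumptions, not those of the plug-in actually developed for this research.

```python
from abc import ABC, abstractmethod

class AssessmentModule(ABC):
    """Contract an external skill or knowledge assessment module must fulfil."""

    @abstractmethod
    def deliver_test(self, user_id: str, test_id: str) -> None:
        """Render the (possibly interactive) test to the student."""

    @abstractmethod
    def collect_result(self, user_id: str, test_id: str) -> dict:
        """Return grade, time spent and attempts for transfer to the Basic TEA."""

class BasicTEA:
    """Subject-independent core: progress bar, competencies and gradebook."""

    def __init__(self):
        self.gradebook = {}  # (user_id, test_id) -> result dict

    def record(self, module, user_id, test_id):
        # The plug-in pulls user data, grades, time spent and attempts from
        # the module and stores them in the core gradebook.
        self.gradebook[(user_id, test_id)] = module.collect_result(user_id, test_id)
```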
As shown in Figure 1, the users log into the LMS
of the educational institution and automatically
navigate to the TEA system through the single sign-
on facility provided by the IMS Learning Tools
Interoperability (LTI) specification (IMS GLC,
2013). The principal concept of LTI is to establish a
standard way of integrating rich learning
applications with platforms like learning
management systems or other educational
environments. Also, the skill and knowledge assessment modules are linked with the Basic TEA system with the aid of the developed plug-in. For transferring data such as user data, grades, time spent and attempts from the two modules to the Basic TEA system, and from the TEA system to the LMS, the OAuth protocol (OAuth, 2013) is used together with the IMS LTI specification. This protocol secures the message interactions between the tools. The connection and communication between tools are carried out through both message-based and service-based connections (Hettiarachchi et al., 2012).
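As a concrete illustration of how such message interactions are secured, the following standard-library-only Python sketch signs an LTI 1.x basic launch request with OAuth 1.0 HMAC-SHA1, per the LTI and OAuth specifications cited above. The launch URL, credentials and parameter values are placeholders, not actual deployment details.

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote

def _enc(s):
    # RFC 3986 percent-encoding, as required by OAuth 1.0
    return quote(str(s), safe="~")

def sign_lti_launch(url, params, key, secret):
    """Add OAuth 1.0 HMAC-SHA1 signature parameters to an LTI launch POST."""
    params = dict(params)
    params.update({
        "oauth_consumer_key": key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    })
    # Signature base string: METHOD & encoded URL & encoded sorted parameters
    pairs = sorted((_enc(k), _enc(v)) for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base = "&".join(["POST", _enc(url), _enc(param_str)])
    signing_key = _enc(secret) + "&"  # no token secret in a basic launch
    digest = hmac.new(signing_key.encode(), base.encode(), hashlib.sha1).digest()
    params["oauth_signature"] = base64.b64encode(digest).decode()
    return params  # POST these form parameters to the tool's launch URL

launch = sign_lti_launch(
    "https://tea.example.edu/lti/launch",          # placeholder tool URL
    {"lti_message_type": "basic-lti-launch-request",
     "lti_version": "LTI-1p0",
     "resource_link_id": "logic-aa-1",
     "user_id": "student-42"},
    key="demo-key", secret="demo-secret",
)
```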
In the case of the Logic course, an MCQ module was used as the knowledge assessment module, and our previously developed Logic ITS tool was used for the skill assessment, enhanced into an assessment tool with features such as a large database of questions at different difficulty levels, randomized selection of questions, immediate and detailed feedback, limited time and limited attempts. Since the skill and knowledge assessment modules are independent, any other tool can be connected with the Basic TEA system in their place, using the developed plug-in, in a secure and interoperable manner. Therefore, depending on the context, any other tool can be used.
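The following hedged Python sketch illustrates randomized question selection of the kind described: an individual test is drawn from a question bank balanced across difficulty levels, and the question order is shuffled. The bank contents and per-level quotas are invented for the example.

```python
import random
from collections import defaultdict

def build_individual_test(bank, per_level, rng):
    """Draw per_level[level] questions of each difficulty at random, then shuffle."""
    by_level = defaultdict(list)
    for q in bank:
        by_level[q["level"]].append(q)
    test = []
    for level, n in per_level.items():
        test.extend(rng.sample(by_level[level], n))  # no repeats within a test
    rng.shuffle(test)                                # randomize question order too
    return test

# Invented bank: 20 easy, 20 medium, 10 hard questions
bank = [{"id": i, "level": lvl}
        for i, lvl in enumerate(["easy"] * 20 + ["medium"] * 20 + ["hard"] * 10)]
test = build_individual_test(bank, {"easy": 2, "medium": 2, "hard": 1},
                             random.Random())
```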
4 EVALUATION
The goal of this paper is to evaluate whether the developed e-assessment system works correctly. The impact of the system on students' performance and engagement in the classroom was also evaluated.
To evaluate the proposed TEA system and analyse its impact on students, the system was used in a Logic course at the Universitat Oberta de Catalunya. The course lasted 14 weeks, with the participation of 38 students.
Figure 1: Architecture of the system with main components of the TEA system.
ATechnologyEnhancedAssessmentSystemforSkillandKnowledgeLearning
187
For the evaluation, a formative assessment model
was introduced into the Logic course for both skill
and knowledge assessment. For the formative
assessment, students were provided with both
practice and assessment activities through the
system. To motivate students to practice with the interactive questions and feedback, a restriction was imposed: students needed to obtain a minimum pass mark in the practice activities before moving on to the assessment activities. Questions within the assessment activities were selected randomly from a large question bank to minimize cheating. Also, soon after completing a particular test, students were offered immediate, detailed feedback. In addition, students took a face-to-face two-hour final examination. The final grade of the course comprised the formative assessment mark (35%) and the summative assessment mark (65%).
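As a worked illustration of this weighting, a short Python sketch follows; the 0-10 marking scale is an assumption, since the paper does not state the scale used.

```python
def final_grade(formative, summative):
    """Final course grade: 35% formative assessment, 65% summative assessment."""
    return 0.35 * formative + 0.65 * summative

# Example on an assumed 0-10 scale: 0.35 * 8.0 + 0.65 * 6.0 = 6.7
print(final_grade(formative=8.0, summative=6.0))
```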
4.1 Analysis and Results
Data were collected mainly from two sources. On
the one hand, data related to student engagement
with the TEA system were obtained through the
system log. On the other hand, a questionnaire was
given to the students to obtain their feedback
regarding the learning experience with the TEA
system. This questionnaire was also used to obtain students' perceptions of the system and the improvements needed in the future, and to draw conclusions regarding the student experience with the system. The questionnaire comprised 28 questions of open-ended, yes/no and five-point Likert scale types, divided into four sections: learner information, student satisfaction, formative assessment and assessment model.
To analyse students’ engagement, data were
obtained from the system logs about student
participation using the TEA system. The results of
the analysis are shown in Figure 2. According to the logs, each student accessed the system a minimum of 5 times on a given day. The TEA system had a session time-out and, therefore, students might have had to log in to the system more than once during the day; this may account for the highest peaks in the diagram. The majority of these peaks occurred close to the deadlines of the assessment activities (AA corresponds to Assessment Activities).
The high peaks at the beginning show that students used the TEA system more intensively to get familiar with its features. Overall, students used the TEA system continuously throughout the whole duration of the course. This could be due to the fact that students appreciated the facilities, such as interactivity, immediate feedback and marks, provided by the TEA system for practice purposes.
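The following illustrative Python sketch shows the kind of aggregation behind such an analysis: counting per-student accesses per day from the system log. The log record layout is an assumption; the paper does not describe it.

```python
from collections import Counter
from datetime import date

def daily_access_counts(log):
    """Count accesses per (student_id, day) pair from (student, date) log entries."""
    return Counter(log)

# Hypothetical log entries: (student_id, access_date)
log = [("s1", date(2013, 10, 1)), ("s1", date(2013, 10, 1)),
       ("s2", date(2013, 10, 1)), ("s1", date(2013, 10, 2))]
per_student_day = daily_access_counts(log)  # e.g. ("s1", 1 Oct) -> 2
per_day = Counter(d for _, d in log)        # daily totals reveal deadline peaks
```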
To explore students' learning experience, a set of questions was introduced into the questionnaire. Regarding student satisfaction, four questions
were given to the students.

Figure 2: Students' engagement in the TEA system.

About the instructions presented for answering the questions, 89% of students agreed that they were presented in
CSEDU2014-6thInternationalConferenceonComputerSupportedEducation
188
a clear and concise manner. Also, 68% of the
students agreed that the automatic grades offered
through the system were very good. At the same
time, 89% of students were satisfied with the
questions provided for both practice and assessment.
Overall, students were satisfied with the TEA system. For formative assessment, students' opinions about practice, assessment, feedback and their relationship to improving the learning process were evaluated. First, it was necessary to understand whether practising before attempting the assessments was helpful: 74% of students agreed, further mentioning that practising with the system helped them to evaluate the skills and knowledge acquired, and that it gave them a comprehensive review of the questions offered in the assessments. When it comes to the automatic feedback, 89% of students agreed that the feedback provided by the TEA system was satisfactory. This can be attributed to the detailed immediate feedback, hints and suggestions introduced in the system. Regarding the marks offered by the TEA system, 89% of students agreed that the marks fitted the knowledge and skills they had developed. Therefore, it can be stated that the TEA system was capable of offering marks or grades that correctly reflect the skills and knowledge acquired by students. Also, 79% of students considered that the practice and assessment tests provided were helpful for learning skills related to the course. Furthermore, students also agreed that both kinds of tests helped them to understand the topics covered in the materials. To gauge the difficulty of the assessments, students were asked about the number of attempts they needed to achieve the minimum score: 74% mentioned 2 attempts and another 11% mentioned 3 attempts. It can be concluded that, on average, 2 attempts were needed to obtain the minimum score. This can be taken as an indicator that the assessments were of medium difficulty and suitable for assessing knowledge and skills. Also, 79% of students strongly agreed that the assessment tests helped them to evaluate their strengths and weaknesses in the Logic course. To get students' opinions about the use of assessment tests in the subject, they were asked whether they would have learned the same without the assessment tests; 89% of students answered no. This shows that students valued the importance of assessments in the learning process. Regarding evaluating the progress of doing tests using the progress bar, only 74% agreed it was useful, and some students mentioned that it was useful but not essential. About the usefulness of the competencies module, 79% of students agreed that it was useful, whereas the remaining 21% did not. When asked why, most of them mentioned that they had not seen the module, since they had not been informed about it. However, when asked about grades and outcomes, interestingly, 100% agreed that both were useful information. Finally, an open-ended question was given to obtain students' comments and suggestions about the system. Overall, students liked the system, although some mentioned that the time given (2 hours) for the assessment test was not enough.
5 CONCLUSIONS
The TEA system was introduced to support the
student learning process based on both skill and
knowledge acquisition. The system was designed to
offer interactivity in e-assessments with an
architecture that favours its application to different
domains and its connection with different existing
LMS in a secure and interoperable way.
This system provided both practice and
assessment facilities for students to improve their
learning process. Therefore, students were constantly engaged with the system and, as a result, their performance in both the formative and the summative assessment improved. Also, the information provided through the progress bar and competencies module helped students to evaluate their own progress. However, most of the students did not fully utilize these features, as they were not informed about them. Therefore, in the future, students have to be informed at the beginning of the course about the facilities offered through the progress bar and the competencies module.
Student participation data in the TEA system showed that students were constantly engaged with the system for both practice and assessment purposes. It also showed that students were more engaged with the system when the completion date of an assessment test was close. Also, students used the system even after the completion dates of the assessments, which showed that they used it to prepare for the final examination. Overall, it can be concluded that students engaged constantly with the TEA system throughout the course. Students were satisfied with the TEA system, the formative assessment, the assessment model, the course scheduling, and the marks and feedback provided. They were also satisfied with the detailed and immediate feedback, and they
ATechnologyEnhancedAssessmentSystemforSkillandKnowledgeLearning
189
believed that doing practice activities had helped them to perform better in the assessments and to evaluate the skills and knowledge acquired. Also, according to students, both the practice and assessment tests helped them to evaluate their strengths and weaknesses in the Logic subject and to learn skills related to it. However, some students mentioned that the experience was a bit stressful and that the allocated time was not enough to complete some of the questions related to skills. Therefore, as an improvement, the time given for the assessments needs to be reconsidered, mostly in the sections where students have to construct the answer using the skill assessment module. At the same time, a complete schedule of assessments has to be displayed on the main course page.
Although this research was carried out in a fully online environment, the developed TEA system, with its formative assessment approach based on skills and knowledge, can be extended to blended courses. In the future, to test the interoperability of the TEA system, it will need to be introduced into other courses based on skills and knowledge, as well as connected with other LMSs.
As a general summary, the technology-enhanced assessment system was capable of supporting the students' learning process and, as a result, students' performance in the online classroom improved.
ACKNOWLEDGEMENTS
This work has been partially supported by the
Spanish Ministry of Science and Innovation funded
Project MAVSEL (ref. TIN2010-21715-C02-02),
Computer Science, Multimedia and
Telecommunication Studies Department of the
Universitat Oberta de Catalunya (UOC) and the
Internet Interdisciplinary Institute (IN3) of the UOC.
REFERENCES
Bevan, N., 2003. UsabilityNet Methods for User Centred
Design. Human-Computer Interaction: Theory and
Practice (Part 1), Proceedings of HCI International, 1,
434 - 438.
Bull, J., & Mckenna, C., 2004. Blueprint for Computer-
Assisted Assessment (Vol. 2). RoutledgeFalmer,
London.
Cook, J., & Jenkins, V., 2010. Getting Started with E-Assessment. Bath. Available at: <http://opus.bath.ac.uk/17712/> [Accessed on 31 January 2014]
Crisp, G., 2007. The e-Assessment Handbook. Continuum
International Publishing Group, London.
Crisp, G., 2009. Interactive E-Assessment: Moving Beyond
Multiple-Choice Questions. Centre for Learning and
Professional Development. Adelaide: University of
Adelaide.
Crisp, G., 2010. Interactive E-Assessment - Practical
Approaches to Constructing More Sophisticated Online
Tasks. Journal of Learning Design, 3 (3), 1 - 10.
Daly, C., Pachler, N., Mor, Y., & Mellar, H., 2010.
Exploring Formative E-Assessment: Using Case
Stories and Design Patterns. Assessment & Evaluation
in Higher Education, 35 (5), 619 - 636.
de Bruyn, E., Mostert, E., & Schoor, A., 2011. Computer-
based Testing - The Ideal Tool to Assess on the
Different Levels of Bloom's Taxonomy. In 14th
International Conference on Interactive Collaborative
Learning (ICL2011), 444 - 449.
Dostalova, L., & Lang, J., 2007. ORGANON The Web
Tutor for Basic Logic Courses. Logic Journal of
IGPL, 15 (4), 305 - 311.
Gibbs, G., & Simpson, C., 2004. Conditions Under Which
Assessment Supports Students' Learning. Learning
and Teaching in Higher Education, 1 (1), 3 - 31.
Gruttmann, S., Böhm, D., & Kuchen, H., 2008. E-
assessment of Mathematical Proofs: Chances and
Challenges for Students and Tutors. International
Conference on Computer Science and Software
Engineering (IEEE), 612 - 615.
Hettiarachchi, E., Huertas, M. & Pera, E., 2013. “Skill and
Knowledge E-Assessment: A Review of the State of
the Art” [online working paper]. (Working Paper
Series; WP00-000). IN3 Working Paper Series. IN3
(UOC). In press.
Hettiarachchi, E., Huertas, M. A., Pera, E. M., &
Guerrero-Roldan, A. E., 2012. An Architecture for
Technology-Enhanced Assessment of High Level Skill
Practice. In 2012 IEEE 12th International Conference
on Advanced Learning Technologies (ICALT), 38-39.
IEEE.
Huertas, A., 2011. Ten Years of Computer-based Tutors
for Teaching Mathematical Logic 2000-2010: Lessons
Learned. In P. Blackburn et al. (Ed.): Third
International Congress on Tools for Teaching
Mathematical logic (TICTTL 2011), LNAI 6680, 131 -
140. Springer, Heidelberg.
Huertas, A., Humet, J. M., López, L., & Mor, E., 2011.
The SELL Project: A Learning Tool for E-Learning
Logic. Learning and Instruction, 6680/2011, 123 -130.
Imperial College London, 2013. Pandora IV. Available at: <http://www.doc.ic.ac.uk/pandora/newpandora/index.html> [Accessed on 31 January 2014]
IMS GLC, 2013. Learning Tools Interoperability. Available at: <http://www.imsproject.org/lti> [Accessed on 31 January 2014]
JISC, 2007. Effective Practice with e-Assessment: An Overview of Technologies, Policies and Practice in Further and Higher Education. Available at: <http://www.jisc.ac.uk/media/documents/themes/elearning/effpraceassess.pdf> [Accessed on 31 January 2014]
Jordan, S., 2009. Assessment for Learning: Pushing the
CSEDU2014-6thInternationalConferenceonComputerSupportedEducation
190
Boundaries of Computer-Based Assessment.
Practitioner Research in Higher Education, 3 (1), 11 -
19.
Krishnamurthy, A. and O’Connor, R., 2013. An Analysis
of the Software Development Processes of Open
Source E-Learning Systems, Systems, Software and
Services Process Improvement. Springer Berlin
Heidelberg, 60-71.
Majchrzak, T. A., & Usener, C. A., 2011. Evaluating the
Synergies of Integrating E-Assessment and Software
Testing. In Proceedings of Information Systems
Development Conference (ISD2011). Springer, 2011.
Marriott, P., 2009. Students' Evaluation of the use of
Online Summative Assessment on an Undergraduate
Financial Accounting Module. British Journal of
Educational Technology, 40 (2), 237 - 254.
McAlpine, M., 2002. Principles of assessment. CAA
Centre, University of Luton.
OAuth, 2013. OAuth - Getting Started. Available at:
<http://oauth.net/documentation/getting-started/>
[Accessed on 31 January 2014]
Pachler, N., Daly, C., Mor, Y., & Mellar, H., 2010.
Formative E-Assessment: Practitioner Cases.
Computers & Education, 54 (3), 715 - 721.
Sadler, D. R., 2013. Opening up Feedback. Reconceptualising Feedback in Higher Education: Developing Dialogue with Students, 54.
Sitthiworachart, J., Joy, M., & Sutinen, E., 2008. Success
Factors for E-Assessment in Computer Science
Education. In C. Bonk et al. (Eds.), Proceedings of
World Conference on E-Learning in Corporate,
Government, Healthcare, and Higher Education, 2287
- 2293.
Tselonis, C., & Sargeant, J., 2007. Domain-specific
Formative Feedback through Domain-independent
Diagram Matching. In F. Khandia (Ed.), 11th CAA
International Computer Assisted Assessment
Conference, 403 - 420. Loughborough, UK:
Loughborough University.
ATechnologyEnhancedAssessmentSystemforSkillandKnowledgeLearning
191