automated scoring and feedback (Molnár et al., 2017). In the next stage of development, technology was used not only to provide automated feedback but also to change item formats and replicate complex, real-life situations through authentic tasks, interactions, dynamic content, virtual worlds and collaboration (second- and third-generation computer-based tests; Pachler et al., 2010; Molnár et al., 2017) in order to measure 21st-century skills. The use of technology has thus substantially improved the efficiency of testing procedures: it accelerates data collection, supports real-time automatic scoring, speeds up data processing, allows immediate feedback and revolutionizes the whole process of assessment, including innovative task presentation (for a detailed discussion of technological issues, see Csapó, Lőrincz, and Molnár, 2012). By the 2010s, the question was no longer debated: CBA had become mainstream, superseding traditional paper-based testing.
This opened a new direction in the development of assessment and a re-thinking of its purpose. Two new questions arose: (1) How can assessment be used to help teachers tailor education to individual students' needs, in other words, to support personalized learning? (2) How can information gathered beyond the answer data (e.g. time on task and number of repetitions) contribute to understanding the phenomenon and learning process under examination, so that learners and teachers receive more elaborate guidance and feedback instead of single indicators, such as a test score?
The development and scope of the eDia system, the focus of this paper, fits this re-thinking of the assessment process. Among its functions, the primary one is to provide regular diagnostic feedback for teachers on their students' development in reading, mathematics and science from the beginning of schooling to the end of the six years of primary education, and to allow significantly more realistic, application-oriented and authentic testing environments that can measure more complex skills and abilities than is possible with traditional assessments.
3.1 The eDia System
In its present form, eDia is a technology-based, learning-centred and integrated online assessment system. It can be divided into two parts: (1) the eDia platform, the software developed for low-stakes TBA, which uses a large number of items and is optimized for large-scale assessment (up to 60,000 students simultaneously); and (2) the item banks, which contain tens of thousands of empirically scaled items in the fields of reading, mathematics and science.
The hardware infrastructure is based on a server farm at the University of Szeged. Thanks to its online technology, the eDia system is available not only in Hungary but can also be used for numerous assessment purposes in any country in the world (for more detailed information, see Csapó and Molnár, submitted).
The eDia system integrates and supports the
whole assessment process from item writing to well-
interpretable feedback. The easy-to-use item builder
module makes it possible to develop first-, second-
and third-generation tasks using any writing system.
(The eDia system has already been used to administer
tests in Chinese, Arabic and Russian, among other
languages.) Thus, the system can be used to measure complex constructs that require innovative item types and new forms of stimuli, such as interactive, dynamically changing elements (e.g. to measure problem solving in the MicroDYN approach; Greiff et al., 2013; Molnár and Csapó, 2018) or simulation-based items (e.g. to measure ICT literacy; Tongori, 2018). A real human–human scenario is also possible during data collection (e.g. to measure collaborative problem solving; Pásztor-Kovács et al., 2018). These complex, mainly interactivity- and simulation-based item formats have so far been used for research and assessments beyond the diagnostic system, which relies mainly on first- and second-generation computer-based items, but the results will also be applied to diagnostic assessments in the long term.
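To make the item builder's role more concrete, the following minimal sketch shows how first- and third-generation items might be represented as data structures. The class names, fields and values are illustrative assumptions made for exposition only; they do not reflect the actual eDia item schema, which is not described here.

# Illustrative sketch only: all names and fields below are assumptions,
# not the eDia system's actual item schema.
from dataclasses import dataclass, field
from enum import Enum

class ItemGeneration(Enum):
    FIRST = 1    # static items, e.g. multiple choice
    SECOND = 2   # multimedia stimuli, drag and drop, etc.
    THIRD = 3    # interactive simulations, e.g. MicroDYN-style tasks

@dataclass
class Item:
    item_id: str
    generation: ItemGeneration
    language: str        # any writing system, e.g. "zh", "ar", "ru"
    stimulus: str        # text or media reference presented to the student
    interaction: dict = field(default_factory=dict)  # dynamic-element settings

# A static first-generation item and an interactive third-generation item
reading_item = Item("read-001", ItemGeneration.FIRST, "hu",
                    "Which word completes the sentence?")
microdyn_item = Item("ps-042", ItemGeneration.THIRD, "hu",
                     "Explore how the controls affect the simulated system.",
                     interaction={"inputs": 3, "outputs": 2, "time_limit_s": 180})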
The item editing module of the system also contains the scoring component of the tasks (a task can be composed of several items), which makes it possible to employ different ways of scoring, from very simple task-level dichotomous scoring to very complicated scoring methods, typically required by items with multiple solutions (e.g. combinatorial tasks). This scoring sub-module provides the information for the automated feedback module of the system.
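The contrast between these scoring approaches can be sketched as follows. Both rules are simplified assumptions made for illustration; in particular, the partial-credit rule for combinatorial tasks is one plausible formulation rather than the rule eDia actually implements.

# Hypothetical scoring rules illustrating the range described above;
# they are not taken from the eDia implementation.
def score_dichotomous(response: str, key: str) -> int:
    """Task-level dichotomous scoring: 1 for a fully correct answer, else 0."""
    return int(response.strip().lower() == key.strip().lower())

def score_combinatorial(listed: set, all_solutions: set) -> float:
    """Partial credit for items with multiple solutions: correct enumerations
    raise the score, incorrect ones lower it (an assumed, simple rule)."""
    correct = len(listed & all_solutions)
    wrong = len(listed - all_solutions)
    return max(0.0, (correct - wrong) / len(all_solutions))

# A combinatorial task asking for all orderings of the letters A and B
print(score_combinatorial({"AB", "BA"}, {"AB", "BA"}))  # 1.0 (complete)
print(score_combinatorial({"AB", "AA"}, {"AB", "BA"}))  # 0.0 (one right, one wrong)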
The eDia system supports both automated and human scoring. The automatic scoring forms the basis for the immediate feedback provided by the diagnostic assessments, while human scoring is reserved for research purposes.
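The division between the two scoring paths might be sketched as follows; the routing logic and names are hypothetical and serve only to illustrate the distinction drawn above.

# Hypothetical routing between the two scoring paths described in the text.
from typing import Optional

def route_response(answer: str, key: Optional[str]) -> dict:
    """Responses with a machine-checkable key are scored automatically,
    enabling immediate feedback; the rest await human rating."""
    if key is not None:
        return {"score": int(answer.strip() == key), "feedback": "immediate"}
    return {"score": None, "feedback": "queued for human scoring"}

print(route_response("42", "42"))        # {'score': 1, 'feedback': 'immediate'}
print(route_response("an essay", None))  # {'score': None, 'feedback': 'queued for human scoring'}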
The test editing module of the system assembles tests from the tasks in several ways. Tests can be constructed with traditional methods (administering the same fixed test to everybody in the assessment). They can also be created as different test versions drawn from previously fixed booklets, thus eliminating the position effect and optimizing anchoring within the tests (at the present