Separate heuristic evaluations were performed on both prototypes. The problems identified were summarized in reports and presented to the developers of the system. After weighing the severity of the problems on one hand against the technical possibilities on the other, one of the prototypes was chosen. An additional heuristic evaluation of the selected prototype was then performed, and the new recommendations were again implemented. This process was repeated over several iterations.
The second method proposed is user testing (Rubin, 2001; Lewis & Rieman, 1994; Nielsen & Mack, 1994; Kuniavsky, 2003; Dumas & Redish, 1999). In this type of study, representatives of the trainees are asked to perform specified tasks with the CAT system. Their actions and comments are recorded and analyzed. It is important to test the CAT system with real users, as neither the designers nor the usability experts can foresee all the problems that users may encounter while completing a task. The difficulties experienced by the users are analyzed, and recommendations for improving the CAT system are made.
User testing has been performed so far for CATS-1 and CATS-3. For CATS-1, ten representative tasks were selected (e.g., ‘Start the system and log on’, ‘Start/stop/pause the lesson’, ‘Take a test’, and ‘Look at the test results’). The user testing identified many problems beyond those found by the heuristic evaluation, some of which were crucial for efficient work with the CAT systems.
6 CONCLUSIONS
So far, IDM has been applied up to the implementation phase, which only CATS-1 has currently reached. This phase is the most interesting and intense in terms of testing, both for educational efficiency and for usability. A very important new element during this phase will be the assessment of achievements, which will serve as the basis for evaluating whether the goal knowledge level has been met.
Although no full IDM iteration has been completed so far, it can already be said that combining several methods across all phases of CAT deployment yields a great deal of useful complementary information and brings together the efforts of the training stakeholders, the IDM team, the software developers, and the educational content providers. Thus IDM seems not only to enable the building of an optimized CAT system, but also to save time and effort by allowing problems to be prevented and solved at the most appropriate moments of CAT development. Although the use of so many tests might seem quite expensive, the authors believe that, after IDM is optimized at the end of the project, the final IDM methodology will prove very efficient in terms of ROI. This claim, however, is a research question that will be addressed in the decisive remaining year of the project.
ACKNOWLEDGEMENTS
All the work presented in this paper was supported by the WELKOM project. We would like to acknowledge the fruitful discussions and collaboration with the WELKOM partners and the NBU team.
REFERENCES
Dumas, J. & Redish, J. (1999). A Practical Guide to Usability Testing. Portland, OR: Intellect.
Fleming, N. D. (2001). Teaching and Learning Styles: VARK Strategies. Honolulu Community College.
Fleming, N. D. & Mills, C. (1992). Not Another Inventory, Rather a Catalyst for Reflection. To Improve the Academy, 11, 137-155.
Kolb, A. Y. & Kolb, D. A. (2005). The Kolb Learning Style Inventory, Version 3.1: Technical Specifications. Experience Based Learning Systems, Inc., Case Western Reserve University.
Kolb, D. A. (1984). Experiential Learning: Experience as the Source of Learning and Development. New Jersey: Prentice Hall.
Kolb, D. A. (2005). The Learning Style Inventory, Version 3.1. Boston, MA: Hay Resources Direct.
Kuniavsky, M. (2003). Observing the User Experience: A Practitioner's Guide to User Research. San Francisco, CA: Morgan Kaufmann.
Lewis, C. & Rieman, J. (1994). Testing the design with users. In: Task-Centered User Interface Design. Retrieved November 13, 2006, from http://www.hcibib.org/tcuid/chap-5.html
Nielsen, J. & Mack, R. L. (Eds.) (1994). Usability Inspection Methods. New York, NY: John Wiley & Sons.
Paprzycki, M., Vidakovic & Ubermanowicz, S. (1995). Comparing Attitudes Toward Computers of Polish and American Prospective Teachers. Technology and Teacher Education Annual, AACE, Charlottesville, VA, 45-48.
Rubin, J. (2001). Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. New York, NY: John Wiley & Sons.