functionality for Analysis or Simulation of modelled data); Code generation criteria (Database, application, or presentation layers; Target language); Procedure model criteria (Adaptability, End-to-End approach); and Administration criteria (User management, Model management).
Based on this catalogue of evaluation criteria, eight open-source CASE tools were evaluated against five predefined scenarios. The results of this study were analyzed and best practices were derived from them.
Following this research approach, the paper is structured as follows: Section 2 identifies and describes related work on CASE and previous tool evaluations. Section 3 outlines the evaluation method. Section 4 defines the evaluation criteria and scenarios, while Section 5 presents the results of the evaluation. Section 6 discusses the evaluation results, and Section 7 concludes the paper.
2 RELATED WORK
The requirements for software engineering have steadily increased over the last decades. The need for tools that assist engineers in the complex software development process therefore soon became evident, coining the term “computer-aided software engineering” (CASE). A good definition can be found in (Fuggetta, 1993): “A CASE tool is a software component supporting a specific task in the software-production process”. Those tasks can be grouped into classes such as editing, programming, verification and validation, configuration management, metrics and measurement, project management, and miscellaneous tools. Fuggetta further distinguishes between tools, workbenches, and environments, according to whether they support only one, a few, or many tasks in the development process.
The current trend towards open-source CASE technology clearly targets CASE workbenches and environments. To obtain high-quality software products from CASE tool output, it is important that these tools themselves provide high-quality techniques. Evaluating them is therefore challenging, and suitable practices have already been developed in the past.
A principal approach to selection and evaluation is given by Le Blanc and Korn (Le Blanc and Korn, 1992). They suggest a three-stage method: 1) screening prospective candidates and developing a short list of CASE software packages; 2) selecting the CASE tool, if any, that best suits the systems development requirements; 3) matching user requirements to the features of the selected CASE tool and describing how these requirements will be satisfied. At each stage a comparison is made against predefined criteria, with the focus on functional requirements. The granularity of the criteria increases at each stage, so every step yields a more refined list of candidate tools. For the evaluation itself, a weighting and scoring model is proposed.
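To make this concrete, the following is a minimal sketch in Python of such a weighting and scoring model; the criteria, weights, and scores are illustrative assumptions, not values taken from Le Blanc and Korn.

# Minimal sketch of a weighting and scoring model; the criteria, weights,
# and scores below are illustrative assumptions, not values from the study.
CRITERIA_WEIGHTS = {
    "functional fit": 0.5,   # relative importance of each criterion
    "usability": 0.3,
    "documentation": 0.2,
}

# Hypothetical scores (0-5) for two shortlisted candidate tools.
TOOL_SCORES = {
    "ToolA": {"functional fit": 4, "usability": 3, "documentation": 5},
    "ToolB": {"functional fit": 3, "usability": 5, "documentation": 2},
}

def weighted_score(scores):
    """Aggregate per-criterion scores into a single weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Rank the short list; the highest total indicates the best-suited tool.
for tool in sorted(TOOL_SCORES, key=lambda t: weighted_score(TOOL_SCORES[t]),
                   reverse=True):
    print(tool, round(weighted_score(TOOL_SCORES[tool]), 2))

With these example values, ToolA scores 3.9 and ToolB 3.4, so ToolA would head the short list.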
Church and Matthews provide a similar work (Church and Matthews, 1995). Their evaluation focuses on four topics: code generation, ease of use, consistency checking, and document generation. The assessment is done through ordinal scales of ordered attributes.
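As an illustration, such an ordinal scale can be handled as follows; the five-point labels are a hypothetical example, not taken from Church and Matthews.

# Minimal sketch of an ordinal scale of ordered attributes; the labels
# are a hypothetical example, not taken from the cited study.
ORDINAL = ["none", "poor", "fair", "good", "excellent"]   # worst to best
RANK = {label: i for i, label in enumerate(ORDINAL)}

def meets(observed, required):
    """Ordinal comparison: does a tool's rating reach a required level?"""
    return RANK[observed] >= RANK[required]

print(meets("good", "fair"))   # True: "good" ranks at least as high as "fair"
print(meets("poor", "good"))   # False: "poor" ranks below "good"

The point of an ordinal scale is that only the order of the attributes carries meaning, so ratings are compared by rank rather than by arithmetic on the values.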
As mentioned above, CASE tool evaluation is not a simple process. To help avoid potential failures in CASE tool evaluations, and thereby poor-quality products as a result, Prather (Prather, 1993) gives recommendations concerning the process itself, the necessary prerequisites, knowledge about the organization, technical factors, and the management of unrealistic or unfulfilled expectations. He clearly recommends keeping the scope of application in mind, because rarely can a single tool fulfil all requirements.
3 EVALUATION METHOD
The evaluation was performed in four steps. At the beginning of our work we conducted expert talks with software engineers and an internet search for appropriate CASE tool candidates in the open-source sector. Based on the described feature sets and a first general examination of those tools, we narrowed the field further. Drawing on the expertise of software engineers as well as on the tools' total number of downloads from the internet, eight open-source CASE tools were selected for evaluation. Table 1 gives an overview of the selected tools, including their versions and release dates.
The next step was the definition of a criteria catalogue, based on the expertise of software engineers regarding the basic functionality of CASE tools. This basic definition was followed by installation and a first test of the tools to get a general overview of the different functionalities provided.
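Such a catalogue can be thought of as criteria grouped under dimensions. The following sketch lists only the criterion groups and sub-criteria named in this paper; the data structure itself is an illustrative assumption.

# Sketch of a criteria catalogue as criteria grouped under dimensions.
# Only sub-criteria mentioned in this paper are listed; the structure
# itself is an illustrative assumption.
criteria_catalogue = {
    "Code generation": ["database layer", "application layer",
                        "presentation layer", "target language"],
    "Procedure model": ["adaptability", "end-to-end approach"],
    "Administration": ["user management", "model management"],
}

for dimension, criteria in criteria_catalogue.items():
    print(dimension + ":", ", ".join(criteria))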
After these first tests, the basic criteria catalogue was extended with further details derived from our initial impressions of the CASE tools under evaluation. This criteria catalogue focuses on different scenarios supported by the overall functionality the tools provide.
Those evaluation criteria define what we expect from CASE tools and are reflected in our five dimensions: Modelling in general, Definition of as-