tested or some test scripts run as threads on a sequential machine. These include the projects TAOS and AGEDIS and the product TestSmith. Project
TAOS (Testing with Analysis and Oracle Support) de-
veloped a toolkit and environment supporting analysis
and testing processes (Richardson, 1994). A unique
aspect of TAOS is its support for test oracles and their
use to verify behavioral correctness of test executions.
The project AGEDIS developed a methodology and
tools for the automation of software testing in gen-
eral, with emphasis on distributed component-based
software systems (Hartman and Nagin, 2004). The
commercial product TestSmith from Quality Forge
enables parallel test script playback on one sequential
workstation. Each playback runs in its own thread.
This allows multiple scripts to be played at the same
time, greatly speeding up the testing process.
The basic feature that distinguishes our approach from those listed above is the use of clusters for parallel testing.
3 CONSTRUCTION AND
PREPARATION OF TEST
CASES IN PARALLEL
To automate test running, test cases must be prepared and organized in structures that support separating test case code from test case data, grouping test cases via test scripts, and changing test cases and test scripts independently.
From the point of view of this paper, it is not important whether test cases are generated by a tool or constructed manually. Our focus is on running test cases in parallel, not on their construction. Our goal was not to develop new methods for test case generation; we simply needed a set of test cases as experimental data.
To illustrate the advantages of running test cases in parallel on a cluster, we needed more test cases than can reasonably be written manually. To obtain enough test cases for our experiments, we used the public-domain tool TestGen4J. The test case preparation process starts with the open-source tool Emma, which instruments the code before testing. The instrumented code records coverage information. After testing, Emma collects this coverage information and generates reports in HTML, text, or XML format.
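Conceptually, such instrumentation inserts probes that record which blocks of code were executed. The following hand-written Java sketch illustrates the idea only; Emma works at the bytecode level, so its actual output differs in detail:

```java
// Hand-written sketch of what a coverage instrumenter does conceptually;
// Emma's real instrumentation is bytecode-level and differs in detail.
public class InstrumentedExample {

    // One probe flag per basic block of the instrumented method.
    static final boolean[] PROBES = new boolean[3];

    static int max(int a, int b) {
        PROBES[0] = true;       // probe: method entry executed
        if (a > b) {
            PROBES[1] = true;   // probe: then-branch executed
            return a;
        }
        PROBES[2] = true;       // probe: else-branch executed
        return b;
    }

    public static void main(String[] args) {
        max(2, 1);              // exercises the entry and then-branch only
        int covered = 0;
        for (boolean p : PROBES) {
            if (p) covered++;
        }
        System.out.println("covered " + covered + " of " + PROBES.length + " blocks");
    }
}
```

After one call that takes the then-branch, the else-branch probe remains unset, which is exactly the per-block information a coverage report aggregates.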
Emma does not construct test cases; for this purpose we needed TestGen4J. Unlike the JUnit Test Generator, which generates only empty method bodies, TestGen4J generates tests based on boundary values of method parameters, which the user defines with the help of rules. TestGen4J works together with two other tools, JTestCase and JavaDoc. To obtain the information about the instrumented classes that TestGen4J needs, e.g., descriptions of classes, methods, constructors, variables, etc., we used JavaDoc, which is part of Sun's J2SDK.
JUnit, which we chose for test case processing, keeps test case code and test data together. This is a disadvantage when test data must be changed, so we used JTestCase, which separates test case code from test case data. For one piece of test case code, TestGen4J generates test data for all combinations of parameter boundary values. Using JTestCase, the load, run, and evaluate steps can run in a loop, applying multiple data sets to the same test case code.
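The load-run-evaluate loop that this code/data separation enables can be sketched as follows. In our setting the data rows would come from JTestCase's XML files; here a hard-coded array stands in for the loaded file, and min() stands in for the method under test (both are illustrative assumptions, not the actual API):

```java
// Sketch of the load-run-evaluate loop enabled by separating test case code
// from test case data. The array stands in for data loaded via JTestCase;
// min() stands in for the method under test (both illustrative).
public class DataDrivenLoop {

    // Method under test (illustrative only).
    static int min(int a, int b) {
        return a < b ? a : b;
    }

    public static void main(String[] args) {
        // One row per combination of boundary values, as TestGen4J would
        // generate: { input a, input b, expected result }.
        int[][] data = {
            { Integer.MIN_VALUE, 0, Integer.MIN_VALUE },
            { 0,                 0, 0 },
            { Integer.MAX_VALUE, 0, 0 },
        };

        int passed = 0;
        for (int[] row : data) {              // load the next data set
            int actual = min(row[0], row[1]); // run the test case code
            if (actual == row[2]) {           // evaluate against the expected result
                passed++;
            }
        }
        System.out.println(passed + "/" + data.length + " data sets passed");
    }
}
```

Because only the data rows change between iterations, new test data can be added without touching the test case code.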
In summary, we prepared test cases as follows: instrumentation of the application classes with Emma, extraction of information about the instrumented classes with JavaDoc, test case generation with TestGen4J, and separation of test case code from test case data with JTestCase. After preparation, the test cases can be executed and evaluated using JUnit as described below.
4 THE CLUSTEST SYSTEM FOR
RUNNING TEST CASES
The system CLUSTEST consists of a front-end (run-
ning partially as a local host and partially as a remote
AFS host on a sequential PC) and a back-end (running
on the cluster). The two modules communicate via messages and share files stored in AFS (the Andrew File System). The architecture and communication schema are shown in Fig. 1.
The front-end module represents the interface to
the user (to select test scripts and to start them) and
controls the processing. It runs on a sequential PC
with a Java Virtual Machine.
First, the tool Emma is started; it produces instrumented classes from the classes of the application under test. The instrumented classes contain additional inserted code needed for coverage measurement. After every class of the application has an instrumented counterpart, the front-end starts the tool TestGen4J, which generates the test cases. Once all test cases have been generated, the test manager in the back-end obtains the instrumented classes and the generated test cases and uses the available cluster nodes to run and evaluate them with the help of the Java test framework JUnit. The results are stored in a log file and later used by the front-end to produce a report.
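The back-end scheduling idea just described can be sketched in a few lines: spread the prepared test cases over the available nodes and collect the results for the log. In this sketch a fixed thread pool simulates the cluster nodes, and the class and test names are illustrative assumptions rather than CLUSTEST's actual interface:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the back-end test manager's scheduling idea. A fixed thread pool
// simulates the cluster nodes; names are illustrative, not CLUSTEST's API.
public class TestManagerSketch {

    public static void main(String[] args) throws Exception {
        List<String> testCases = List.of("TestA", "TestB", "TestC", "TestD");
        int nodes = 2; // number of available cluster nodes
        ExecutorService cluster = Executors.newFixedThreadPool(nodes);

        // Submit each test case; every idle node picks up the next pending case.
        List<Future<String>> logEntries = new ArrayList<>();
        for (String tc : testCases) {
            logEntries.add(cluster.submit(() -> tc + ": PASS")); // run + evaluate
        }

        // Collect results in submission order, as for the log file.
        for (Future<String> entry : logEntries) {
            System.out.println(entry.get());
        }
        cluster.shutdown();
    }
}
```

With more nodes than test cases the pool simply leaves workers idle; with fewer, cases queue until a node finishes, which mirrors the load balancing the test manager performs on the cluster.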
When test cases are prepared as described above,
ENASE 2008 - International Conference on Evaluation of Novel Approaches to Software Engineering