the genetic algorithm, the weight of the complexity
rules was lowered significantly. Instead of lowering
the weight, the misjudged values themselves can be
corrected: if many faults are found in a module that
has a rather low assessed complexity, this complexity
value can be reassessed and changed.
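As a minimal sketch, such a correction could look as follows; the function name, the expected-fault ratio, and the tolerance are hypothetical choices for illustration and are not taken from the described system.

# Sketch: instead of lowering the weight of the complexity rule globally,
# the complexity value of a single module is reassessed when the observed
# faults contradict it. All names and thresholds are illustrative.

def reassess_complexity(assessed_complexity: float,
                        observed_faults: int,
                        expected_faults_per_complexity: float = 2.0,
                        tolerance: float = 1.5) -> float:
    """Return a corrected complexity value for one module.

    If far more faults are observed than the assessed complexity suggests,
    the complexity value itself is raised; the rule weight stays untouched.
    """
    expected_faults = assessed_complexity * expected_faults_per_complexity
    if observed_faults > tolerance * expected_faults:
        # Derive the complexity that would explain the observed fault count.
        return observed_faults / expected_faults_per_complexity
    return assessed_complexity

# Example: a module judged as low complexity (2.0) but showing 12 faults
# is reassessed to a higher complexity instead of discounting the rule.
print(reassess_complexity(assessed_complexity=2.0, observed_faults=12))  # -> 6.0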
5 SWARM LEARNING
The test case prioritization system consists of a set of
individual software agents. Currently, the agents act
largely independently when calculating the fault-proneness
or the fault-revealing probability. The implemented
learning algorithm improves the fault-proneness calculation
for each agent individually. By adding swarm intelligence
features, the agents become capable of generating further
information collectively. By comparing the learning results
for a parameter, the agents can determine whether a learned
characteristic is valid in the whole project, in a specific
part of the software, or only in a single module. If the
characteristic of a single module differs from all others,
this is a hint at a possibly incorrect parameter value.
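A minimal sketch of this comparison is given below; the data structure, the deviation threshold, and the function name are illustrative assumptions and are not prescribed by the system described here.

# Sketch: each agent reports the value it has learned for a parameter, and a
# simple leave-one-out comparison marks values that deviate strongly from the
# other modules as a hint at a possibly incorrect parameter value.

from statistics import mean

def classify_learned_parameter(learned_values, max_deviation=0.25):
    """Label each module's learned value as 'common' or 'outlier'."""
    labels = {}
    for module, value in learned_values.items():
        others = [v for m, v in learned_values.items() if m != module]
        # Compare against the collective result of all other agents.
        if abs(value - mean(others)) > max_deviation:
            labels[module] = "outlier"
        else:
            labels[module] = "common"
    return labels

# Example: three agents agree, one module deviates strongly.
learned = {"module_a": 0.42, "module_b": 0.44, "module_c": 0.43, "module_d": 0.95}
print(classify_learned_parameter(learned))
# -> module_a/b/c: 'common', module_d: 'outlier'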
Characteristics that are common to the whole project
can be abstracted and used as basic knowledge for
newly introduced modules or test cases. The agents
that represent these modules or test cases do not need
to learn the common characteristics and can provide
better results earlier. Comparing the learning results
also helps to identify wrongly learned relations: strong
deviations and fluctuations can be detected and corrected.
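The following sketch, which reuses the illustrative outlier labels from above, shows one possible way to abstract the common characteristic and seed the agent of a newly introduced module with it; the representation of this basic knowledge is an assumption for illustration only.

# Sketch: characteristics marked as common to the whole project are averaged
# and handed to the agent of a new module as its starting knowledge.

from statistics import mean

def abstract_common_knowledge(learned_values, labels):
    """Average the values of all modules whose characteristic is project-wide."""
    common = [v for m, v in learned_values.items() if labels[m] == "common"]
    return mean(common) if common else None

def init_new_agent(common_value, default):
    """Seed a new module's agent with abstracted knowledge instead of a default."""
    return common_value if common_value is not None else default

learned = {"module_a": 0.42, "module_b": 0.44, "module_c": 0.43, "module_d": 0.95}
labels = {"module_a": "common", "module_b": "common",
          "module_c": "common", "module_d": "outlier"}
start_value = init_new_agent(abstract_common_knowledge(learned, labels), default=0.5)
print(start_value)  # ~0.43: the new agent starts from the project-wide characteristic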
6 CONCLUSIONS
In this article, we introduced a learning agent-based
test case prioritization system. The system uses a
genetic algorithm to improve the test case prioritization
with the knowledge that grows during the ongoing
development and test process. Our analysis, with a
learning fault-proneness calculation as the basis for the
test case prioritization, showed that the learning agents
are able to improve the prioritization significantly.
Especially if the evaluated information is wrong or
imprecise, e.g. because the developers misjudged the
complexity of a module, the learning algorithm helps
to reduce the effect of these inaccurate parameters.
In our future work, we will extend the genetic
algorithm to the fault-revealing calculation of the test
cases. In parallel, we are investigating further techniques
that may help to improve the test case prioritization,
for example the realization of the consistency checking
and swarm intelligence described in Sections 4 and 5.