
In general, Thestes selected test cases with a higher 
number of dirty places at the end of the tests; these 
are considered the test cases with worse performance. 
Table 3 illustrates five episodes from the simulation 
of the interaction between Agent and Env, in the 
environment that obtained the average utility value 
for SR_Partial. 
Table 3: Partial history of Agent in Env. 

k   P                 A       –av_E(P, A)   –av_L(P, A) 
1   ..., Clean, ...   Below   2.0           -1.0 
2   ..., Clean, ...   Right   2.0           -1.0 
3   ..., Clean, ...   Below   2.0           -1.0 
4   ..., Dirty, ...   Aspire  1.0           -2.0 
5   ..., Clean, ...   Left    2.0           -1.0 
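The values in Table 3 follow a simple pattern. As an illustration only (the actual evaluation functions av_E and av_L are defined earlier in the paper and are not reproduced here), the following Python sketch maps each percept-action pair to the pair of values shown in the table; the MOVES set and the episode_values helper are hypothetical names introduced for this sketch:

```python
# Hypothetical reconstruction of the per-episode values in Table 3.
# Assumption: any movement action over a clean place yields (2.0, -1.0),
# and Aspire over a dirty place yields (1.0, -2.0), as the table shows.
MOVES = {"Left", "Right", "Above", "Below"}

def episode_values(percept: str, action: str) -> tuple:
    """Return the (-av_E, -av_L) pair for one episode, as in Table 3."""
    if action in MOVES and percept == "Clean":
        return (2.0, -1.0)   # movement over a clean place
    if action == "Aspire" and percept == "Dirty":
        return (1.0, -2.0)   # aspiring a dirty place
    raise ValueError(f"pair not covered by Table 3: {percept}, {action}")

# Episodes 1-5 of Table 3
history = [("Clean", "Below"), ("Clean", "Right"), ("Clean", "Below"),
           ("Dirty", "Aspire"), ("Clean", "Left")]
print([episode_values(p, a) for p, a in history])
```

This sketch only reproduces the five rows above; it is not the paper's evaluation function.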
The environment selected is composed of places 
with the following configuration: [[C, C, C, C, D], 
[C, C, D, D, C], [C, D, D, D, D], [C, C, C, D, D], [C, 
C, D, D, D]]. The utility value is U = 15.5 and the 
inadequacy values are –f_E = 49.0 and –f_C = -26.0. The 
other episodes related to the history of Agent in Env 
follow the same pattern. As expected, the cleaner 
agent is more adequate to the environment with 
respect to the cleaning objective than to the energy 
objective. A brief analysis of the condition-action 
rules of the cleaner agent confirms this proposition. 
The history obtained by RIS_Partial follows the 
same pattern. 
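The dirty-place count that drives test-case selection can be read directly off this configuration. A minimal Python sketch, assuming only that 'C' marks a clean place and 'D' a dirty one (the utility and inadequacy functions themselves are defined earlier in the paper and are not reproduced here):

```python
# The 5x5 environment configuration quoted in the text
# ('C' = clean place, 'D' = dirty place).
env = [["C", "C", "C", "C", "D"],
       ["C", "C", "D", "D", "C"],
       ["C", "D", "D", "D", "D"],
       ["C", "C", "C", "D", "D"],
       ["C", "C", "D", "D", "D"]]

dirty = sum(row.count("D") for row in env)   # dirty places
clean = sum(row.count("C") for row in env)   # clean places
print(dirty, clean)  # prints: 12 13
```

With 12 of 25 places dirty, this configuration is among the harder ones for the cleaner agent, which is consistent with the selection strategy favoring test cases with more dirty places.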
Thus, as expected, the cleaner agent with a simple 
reactive architecture and partial observability 
presents the worst performance in the evaluation: a 
brief analysis of its condition-action rules shows that 
the agent does not consider previous perceptions and 
actions related to the energy and cleaning 
objectives. As the cleaner agent was designed as a 
simple reactive agent, little can be done to improve 
its performance. In this sense, an extension of its 
structure is required in order to widen the 
observability of the environment, allowing it to 
choose better actions. Consequently, the agent would 
be able to save energy by avoiding places that have 
already been visited. 
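Such a memoryless condition-action rule can be sketched as follows. The rule shown (aspire when the current place is dirty, otherwise move at random) is an assumption consistent with the percepts and actions in Table 3, not the paper's actual rule set:

```python
import random

# Sketch of a simple reactive (memoryless) cleaner agent under partial
# observability: the rule looks only at the current percept, so the agent
# cannot remember, and therefore cannot avoid, places already visited.
def simple_reactive_agent(percept: str) -> str:
    if percept == "Dirty":
        return "Aspire"  # clean the current place
    # No memory of past perceptions or actions: wander at random.
    return random.choice(["Left", "Right", "Above", "Below"])

print(simple_reactive_agent("Dirty"))  # prints: Aspire
```

Because the function's output depends only on its single argument, no extension of the rule body alone can use the visit history; widening observability requires changing the agent's structure, as argued above.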
6 CONCLUSIONS 
Considering that a rational agent should be able 
to accomplish its goals, appropriate tests should be 
developed to evaluate the actions and plans executed 
by the agent in achieving these goals. In this 
context, techniques that consider the peculiar nature 
of the agent are required. 
The proposed approach considers that, in the case 
of rational agents, where the performance evaluation 
measure is established by the designer, multiple and 
possibly conflicting objectives must be taken into 
account. In the proposed approach, the test results 
should indicate the average performance of the agent 
and, especially, the goals that are not being met, as 
well as information about the histories of the agent, 
which is useful for identifying the agent behaviors 
that need to be improved. 
The information generated by the approach 
indicates a utility measure associated with the 
performance of the tested agent and the objectives in 
the evaluation measure that are not being satisfied. 
Considering the best set of histories of the agent in 
the environment, associated with the set of test cases 
selected by the approach at the end of the search 
process, the designer and/or other auxiliary 
automated systems can identify the problematic 
episodes that are causing the unsatisfactory 
performance of the agent. 
As future work, we suggest a case study with 
objective-based and utility-based agents. 
Additionally, we intend to adapt the approach to 
provide a testing strategy capable of testing agent 
interaction in multiagent systems. 
ICEIS 2014 - 16th International Conference on Enterprise Information Systems