This paper presents an approach that aims to
contribute to the process of testing rational agents.
The underlying assumption is that any test depends on the
selected test cases, which should generate
information that identifies the components of the
tested agent program that are causing
unsatisfactory performance. More
specifically, the proposed approach consists of
designing an agent that monitors the tests of the
rational agent, while the designer manages
relevant information about the agent's performance and
faults during testing and makes
improvements to the agent program.
2 BACKGROUND
2.1 Rational Agents
Rational agents select their actions aiming at the
best possible outcome or, in the presence of
uncertainty, the best expected outcome, according to a
performance measure established to evaluate their
behavior. Designing rational agents for complex task
environments is a nontrivial task (Russell and
Norvig, 2013; Silveira et al., 2013).
The task of Artificial Intelligence is to design
the agent program, which implements the agent
function and runs on some architecture, i.e., a
computing device with actuators and sensors.
Depending on the environment, the agent can be
designed around four basic types
of agent programs: (1) simple reactive agents, which select
actions based on the current percept, ignoring the
perception history; (2) model-based reactive
agents, in which the agent keeps an internal state that depends
on the perception history; (3) goal-based agents, which,
beyond the internal state, keep
information about goals that describe desirable
situations; and (4) utility-based agents, which have a utility
function mapping a state to an associated degree of
happiness. In environments where the agent does
not know the possible states and the effects of its
actions, the design of a rational agent may
require an agent program with learning capabilities
(Russell and Norvig, 2013).
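As a concrete illustration of type (1), a simple reactive agent can be written as a table of condition-action rules applied only to the current percept, in the style of the classic vacuum-world example from Russell and Norvig. The rule table and names below are illustrative assumptions, not part of the paper:

```python
# Simple reactive agent: selects an action from condition-action rules
# applied to the current percept alone, ignoring perception history.
# Percepts are (location, status) pairs for a two-square vacuum world.

RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def reflex_vacuum_agent(percept):
    """Map the current percept directly to an action via the rule table."""
    return RULES[percept]
```

Because the agent consults nothing but the current percept, two identical percepts always yield the same action, regardless of what happened before.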
These four types of agent programs can be
decomposed into three main subsystems that process
information. The first, the perception subsystem,
maps perception data (P) into an abstract
representation (State) useful to the agent: see: P →
State. The second, the internal-state update
subsystem, maps the representation of the current percept
and the information about the internal state (IS) held by
the agent into a new internal state: next: State × IS →
IS. Finally, the decision-making subsystem maps
information about the internal state into a possible
action (A): action: IS → A (Wooldridge, 2002).
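The see/next/action decomposition above can be sketched as an agent skeleton whose perceive-update-act cycle composes the three subsystems. The method names follow Wooldridge's decomposition; the string representations of percepts, states, and actions are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Minimal sketch of Wooldridge's three-subsystem decomposition."""
    internal_state: str = "initial"

    def see(self, percept: str) -> str:
        """Perception subsystem, see: P -> State."""
        return f"state({percept})"

    def next(self, state: str) -> str:
        """Internal-state update subsystem, next: State x IS -> IS."""
        self.internal_state = f"{self.internal_state}+{state}"
        return self.internal_state

    def action(self) -> str:
        """Decision-making subsystem, action: IS -> A."""
        return f"act_on({self.internal_state})"

    def step(self, percept: str) -> str:
        """One perceive-update-act cycle."""
        state = self.see(percept)
        self.next(state)
        return self.action()
```

Each agent type in the previous paragraph specializes these three functions differently, while keeping the same overall cycle.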
In the simple reactive agent program, the action
function selects actions based on the current
percept, mapped by the see function, and a set of
rules in condition-action format. In model-based
reactive agents, the next function keeps a
description of the state of the agent's environment in
memory. The action function of goal-based
agent programs selects actions using the
information processed by the next function and
information about the goals that describe desirable
situations in the environment. The action function of
utility-based agents uses a utility function to map
descriptions of the environmental state to an
associated degree of happiness.
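The action function of a utility-based agent can be sketched as choosing the action whose predicted resulting state maximizes the utility function. The function names, the transition model, and the toy one-dimensional example below are illustrative assumptions, not the paper's notation:

```python
def utility_based_action(state, actions, predict, utility):
    """Pick the action whose predicted next state has the highest utility.

    state   -- current description of the environmental state
    actions -- iterable of available actions
    predict -- model mapping (state, action) to an expected next state
    utility -- maps a state to an associated degree of happiness (a number)
    """
    return max(actions, key=lambda a: utility(predict(state, a)))

# Illustrative use: states are positions on a line, actions shift the
# position, and utility rewards being close to a goal position.
goal = 5
best = utility_based_action(
    state=3,
    actions=[-1, +1],
    predict=lambda s, a: s + a,
    utility=lambda s: -abs(s - goal),
)
```

Here the agent prefers the action moving it toward the goal, since that predicted state scores higher utility.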
2.2 Testing Agents
Software testing is an activity that aims to evaluate
and improve product quality by identifying
defects and problems. A test that succeeds in detecting
defects is one that makes the system operate
incorrectly and, as a consequence, exposes the
defects (Sommerville, 2011; Pressman and Maxim,
2014).
Due to the peculiar properties of rational agents
(reactivity, memory, goals, utility, and
learning) and of their task environments,
there is a demand for new testing techniques tailored
to the particular nature of agents. To test
intelligent agents, existing
software testing techniques must be adapted and
combined with the aim of detecting different faults,
making software agents more reliable. Most works
in the literature consist of adaptations of
conventional software testing techniques. In the case of
rational agents, these adaptations
should seek to evaluate the rationality of the actions and
plans executed by the agent in its task environment
(Houhamdi, 2011a; Houhamdi, 2011b).
Test input selection for intelligent agents
is a problem precisely because
agents are intended to operate robustly under
conditions that developers did not consider and
would therefore be unlikely to test (Padgham et al.,
2013).
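One pragmatic response to this selection problem is to sample test environments randomly, so the agent is also exercised under conditions the developer did not enumerate by hand. The sketch below assumes the two-square vacuum world used earlier as an illustration; the generator, its parameters, and the configuration fields are all assumptions, not the paper's method:

```python
import random

def generate_test_cases(n, seed=0):
    """Randomly sample initial environment configurations as test inputs.

    Each test case fixes the agent's starting square and the dirt status
    of each square, covering configurations a developer might not list.
    """
    rng = random.Random(seed)  # seeded for reproducible test runs
    cases = []
    for _ in range(n):
        cases.append({
            "agent_position": rng.choice(["A", "B"]),
            "dirt": {loc: rng.random() < 0.5 for loc in ("A", "B")},
        })
    return cases
```

Seeding the generator keeps runs reproducible, so a failing configuration can be replayed while the designer investigates the fault.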
Some approaches focus on producing
test artifacts to support development
methodologies for agent systems (Nguyen, 2008).
The assumption in most studies is that a good
evaluation of an agent depends on the selected test cases.
ICEIS 2015 - 17th International Conference on Enterprise Information Systems