
exist. We make this assumption here, i.e., belief theories must necessarily be semi-normal default theories.
Example 2 With Reiter semantics, the default extensions of $\Delta_{RBot}$ would be

$E_{RBot_1} = Cn(\{ at(A), path(A, B), clear(A, B), path(B, C), path(C, F), \neg carry(p), trapped(P, F), clear(B, C), \neg clear(B, D), \neg clear(D, F), clear(C, F), \ldots \})$

$E_{RBot_2} = Cn(\{ at(A), path(A, B), clear(A, B), path(B, C), path(C, F), \neg carry(p), trapped(P, F), clear(B, D), clear(D, F), \neg clear(C, F), \ldots \})$

etc.
To operate, an agent program needs to commit to one extension of its default theory. An extension selection function allows the agent to select the most preferred extension from the set of extensions of its default theory for further execution. Let $S_E$ be an extension selection function; if $B = S_E(\Delta)$, then (1) $B$ is a default extension of $\Delta$ and (2) $B$ is the extension most preferred by the agent at the time $S_E$ is applied. In the rest of this paper, the current agent belief set will be denoted by $B = S_E(\Delta)$, given an agent belief theory $\langle \Delta, S_E \rangle$.
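As an illustration only (the paper leaves the preference criterion to the agent designer), the following Python sketch realizes an $S_E$-style selection over extensions represented as finite sets of ground literals; `select_extension` and the ranking function are our own hypothetical names:

```python
from typing import Callable, FrozenSet, Set

Extension = FrozenSet[str]  # an extension, abstracted as a finite set of ground literals

def select_extension(extensions: Set[Extension],
                     preference: Callable[[Extension], float]) -> Extension:
    """Hypothetical S_E: return the extension the agent currently prefers most."""
    if not extensions:
        raise ValueError("the default theory has no extension")
    return max(extensions, key=preference)

# Usage: prefer the extension that assumes more corridors are clear.
e1 = frozenset({"at(A)", "clear(A,B)", "clear(B,C)", "clear(C,F)"})
e2 = frozenset({"at(A)", "clear(B,D)", "clear(D,F)"})
belief_set = select_extension(
    {e1, e2}, preference=lambda e: sum(l.startswith("clear") for l in e))
assert belief_set == e1
```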
Belief Change Operators
Belief change is a complicated issue. There is a substantial body of well-known work on belief change, such as (Alchourrón et al., 1985; Ghose et al., 1998; Meyer et al., 2001; Darwiche and Pearl, 1997; Ghose and Goebel, 1998). In this paper, we do not discuss the issue in detail; however, for the completeness of our system, we adopt the belief change framework of (Ghose et al., 1998). We denote by $\circ_g$ (respectively $-_g$) Ghose's revision (respectively contraction) operator.
When updating the agent belief theory, we assume that (1) the belief to be revised must be consistent, (2) the belief to be revised must be consistent with the set of base facts of the belief theory, (3) the belief to be contracted must not be a tautology, and (4) the belief to be contracted must not be entailed by the base facts.
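To make these four preconditions concrete, here is a minimal sketch, assuming beliefs are ground literals whose negation is marked by a leading "¬"; the actual revision and contraction are left to operators such as $\circ_g$ and $-_g$, whose internals we do not model:

```python
def negate(literal: str) -> str:
    """Flip the sign of a ground literal, e.g. 'at(A)' <-> '¬at(A)'."""
    return literal[1:] if literal.startswith("¬") else "¬" + literal

def can_revise(belief: str, base_facts: set[str]) -> bool:
    # (1) a single ground literal is trivially self-consistent;
    # (2) it must also be consistent with the base facts.
    return negate(belief) not in base_facts

def can_contract(belief: str, base_facts: set[str]) -> bool:
    # (3) a bare literal is never a tautology in this toy representation;
    # (4) the belief must not be entailed by (here: a member of) the base facts.
    return belief not in base_facts
```

A full implementation would replace these membership tests with genuine entailment and consistency checks over the belief theory.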
Goals, Triggers, Plans and Intentions
We follow the original definitions in (Rao, 1996) to define goals and triggering events. Two types of goals are of interest: achievement goals and test goals. An achievement goal, denoted $!g(\vec{t})$, indicates an agent's desire to achieve a state of affairs in which $g(\vec{t})$ is true. A test goal, denoted $?g(\vec{t})$, indicates an agent's desire to determine if $g(\vec{t})$ is true relative to its current beliefs. Test goals are typically used to identify unifiers that make the test goal true, which are then used to instantiate the rest of the plan. If $b(\vec{t})$ is a belief and $!g(\vec{t})$ is an achievement goal, then $+b(\vec{t})$ (add a belief $b(\vec{t})$), $-b(\vec{t})$ (remove a belief $b(\vec{t})$), $+!g(\vec{t})$ (add an achievement goal $!g(\vec{t})$), and $-!g(\vec{t})$ (remove the achievement goal $!g(\vec{t})$) are triggering events.
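These four event forms can be captured by a small data type; the following sketch uses our own names, not AgentSpeak syntax:

```python
from dataclasses import dataclass
from enum import Enum

class TriggerKind(Enum):
    ADD_BELIEF = "+b"   # +b(t): a belief was added
    DEL_BELIEF = "-b"   # -b(t): a belief was removed
    ADD_GOAL = "+!g"    # +!g(t): an achievement goal was posted
    DEL_GOAL = "-!g"    # -!g(t): an achievement goal was dropped

@dataclass(frozen=True)
class Trigger:
    kind: TriggerKind
    predicate: str           # e.g. "trapped"
    args: tuple[str, ...]    # the term vector t

# The event that would fire RBot's reactive plan p5 below:
event = Trigger(TriggerKind.ADD_BELIEF, "trapped", ("p", "x"))
```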
An agent program includes a plan library. The original AgentSpeak (Rao, 1996) definition views a plan as a triple consisting of a trigger, a context (a set of pre-conditions that must be entailed by the current set of beliefs) and a body (consisting of a sequence of atomic actions and sub-goals). We extend this notion to distinguish between an invocation context (the pre-conditions that must hold at the time the plan is invoked) and an invariant context (conditions that must hold both at the time of plan invocation and at the invocation of every plan to achieve sub-goals in the body of the plan and their sub-goals). We view both kinds of contexts as involving both hard pre-conditions (sentences that must be true relative to the current set of beliefs) and soft pre-conditions (sentences that must be consistent with the current set of beliefs). Soft pre-conditions are akin to assumptions, justifications in default rules (Reiter, 1980), or constraints in hypothetical reasoning systems (Poole, 1988).
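Operationally, the hard/soft distinction can be sketched as follows, under the simplifying assumption that the belief set is a finite set of ground literals, so that entailment reduces to membership and consistency to the absence of the complementary literal:

```python
def negate(literal: str) -> str:
    return literal[1:] if literal.startswith("¬") else "¬" + literal

def satisfies_context(beliefs: set[str],
                      hard: set[str],
                      soft: set[str]) -> bool:
    """Check a context (hard, soft) against the current belief set."""
    entailed = all(h in beliefs for h in hard)                 # hard: must be believed
    consistent = all(negate(s) not in beliefs for s in soft)   # soft: must be consistent
    return entailed and consistent

# RBot at A: the hard part of p1's invocation context holds, and it is
# consistent to assume clear(A,B), so p1 would be applicable.
assert satisfies_context({"at(A)", "path(A,B)"},
                         hard={"at(A)"}, soft={"clear(A,B)"})
```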
Definition 1 A plan is a 4-tuple $\langle \tau, \chi, \chi^*, \pi \rangle$ where $\tau$ is a trigger, $\chi$ is the invocation context, $\chi^*$ is the invariant context and $\pi$ is the body of the plan. Both $\chi$ and $\chi^*$ are pairs of the form $(\beta, \alpha)$ where $\beta$ denotes the set of hard pre-conditions while $\alpha$ denotes the set of soft pre-conditions. A plan $p$ is written as $\langle \tau, \chi, \chi^*, \pi \rangle$ where $\chi = (\beta, \alpha)$ (also referred to as $InvocationContext(p)$), $\chi^* = (\beta^*, \alpha^*)$ (also referred to as $InvariantContext(p)$), $\pi = \langle h_1, \ldots, h_n \rangle$ (also referred to as $Body(p)$) and each $h_i$ is either an atomic action or a goal. We will also use $Trigger(p)$ to refer to the trigger $\tau$ of plan $p$.
Example 3 RBot's plan library:

$p_1 = \langle +!at(y), (\{at(x)\}, \{\emptyset\}), (\{\emptyset\}, \{clear(x, y)\}), \langle move(x, y) \rangle \rangle$

$p_2 = \langle +!at(y), (\{\neg at(x), path(x, y)\}, \{\emptyset\}), (\{\emptyset\}, \{clear(x, y)\}), \langle !at(x), ?clear(x, y), move(x, y) \rangle \rangle$

$p_3 = \langle +!rescue(p, x), (\{\emptyset\}, \{\emptyset\}), (\{\emptyset\}, \{trapped(p, x) \lor carry(p)\}), \langle !at(x), pick(p), !at(A), release(p) \rangle \rangle$

$p_4 = \langle +on\_fire(x), (\{at(x), \neg on\_fire(y), path(x, y)\}, \{clear(x, y)\}), (\{\emptyset\}, \{\emptyset\}), \langle move(x, y) \rangle \rangle$

$p_5 = \langle +trapped(p, x), (\{\emptyset\}, \{\emptyset\}), (\{\emptyset\}, \{\emptyset\}), \langle !rescue(p, x) \rangle \rangle$

$P_{RBot} = \{p_1, p_2, p_3, p_4, p_5\}$
In Example 3, plans $p_1$ and $p_2$ are RBot's strategies for getting to a specific node on the map. Plan $p_3$ is the strategy that helps RBot decide how to rescue a person trapped at a node. Plan $p_4$ is a reactive plan for RBot to get out of a node that is on fire. Plan $p_5$ is another reactive plan for RBot to attempt a rescue when it adds a new belief that a person is trapped at some node.
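For illustration, Definition 1 and plan $p_1$ could be encoded as follows; this is a sketch under our own naming (triggers abbreviated to strings), as the paper prescribes no concrete syntax:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    hard: frozenset[str]   # beta: must be entailed by the current beliefs
    soft: frozenset[str]   # alpha: must be consistent with the current beliefs

@dataclass(frozen=True)
class Plan:
    trigger: str           # tau
    invocation: Context    # chi
    invariant: Context     # chi*
    body: tuple[str, ...]  # pi = <h1, ..., hn>: atomic actions and (sub-)goals

# p1: if the goal at(y) is posted while at x, move directly, provided it
# stays consistent to assume the corridor (x, y) is clear.
p1 = Plan(trigger="+!at(y)",
          invocation=Context(hard=frozenset({"at(x)"}), soft=frozenset()),
          invariant=Context(hard=frozenset(), soft=frozenset({"clear(x,y)"})),
          body=("move(x,y)",))
```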