(contract by A), removes A from K and changes the set K so that it no longer entails A. Revision, $K \dot+ A$, modifies K to make it consistent with A before adding A. If A is consistent with K, then the revision operator is equal to the expansion operator.
In general there is no unique maximal set $K' \subset K$ which does not imply A, so contraction and revision cannot be defined uniquely. The AGM theory characterises the set of "rational" contraction and revision operators by a set of postulates (Alchourron et al., 1985). The postulates for contraction are:
$(K \dot- 1)$ $K \dot- A = Cn(K \dot- A)$   (closure)
$(K \dot- 2)$ $K \dot- A \subseteq K$   (inclusion)
$(K \dot- 3)$ If $A \notin K$, then $K \dot- A = K$   (vacuity)
$(K \dot- 4)$ If $\not\vdash A$, then $A \notin K \dot- A$   (success)
$(K \dot- 5)$ If $A \in K$, then $K \subseteq (K \dot- A) + A$   (recovery)
$(K \dot- 6)$ If $Cn(A) = Cn(B)$, then $K \dot- A = K \dot- B$   (equivalence)
where Cn(K) denotes the closure of K under logical
consequence.
While the definition looks desirable, it seems to apply only to idealised agents, since it requires that the belief base is closed under logical consequence. In order to revise a belief base which cannot be assumed to be closed under logical consequence, we weaken the logic and language of the agent so that it corresponds to a typical rule-based agent (Alechina et al., 2006b). This approach leads to a polynomial-time algorithm which satisfies all of the AGM postulates except $(K \dot- 5)$, the recovery postulate. The algorithm allows for both AGM and reason-maintenance style belief revision.
In (Alechina et al., 2006b) the weakened logic is called W. In the corresponding language, $L_W$, a well-formed formula is either a literal ($P$ or $\neg P$) or a plan ($P_1 \wedge \cdots \wedge P_n \to Q$).
The only rule is a generalised modus ponens (GMP):

$$\frac{\delta(P_1), \ldots, \delta(P_n) \qquad P_1 \wedge \cdots \wedge P_n \to Q}{\delta(Q)}$$

δ is a substitution function replacing all free variables with constants. The agent uses GMP and its belief base to reason. When the rule is fired, the derived (ground) literal $\delta(Q)$ will be justified by $\delta(P_1), \ldots, \delta(P_n)$.
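As a concrete illustration, the following is a minimal sketch of GMP firing in Python. The `Plan` class, the `fire` generator and the convention of writing variables as `"?"`-prefixed strings are our own assumptions, not notation from the cited work:

```python
from itertools import product

# A ground literal is a tuple ("P", ("a",)) standing for P(a);
# variables in plan patterns are strings prefixed with "?".

class Plan:
    """A plan P1 and ... and Pn -> Q, with variables written as "?x"."""
    def __init__(self, body, head):
        self.body = body   # list of (predicate, args) patterns
        self.head = head   # (predicate, args) pattern

def match(pattern, fact, subst):
    """Extend substitution `subst` so that `pattern` equals `fact`; None on failure."""
    pred, args = pattern
    fpred, fargs = fact
    if pred != fpred or len(args) != len(fargs):
        return None
    subst = dict(subst)
    for a, f in zip(args, fargs):
        if a.startswith("?"):            # variable: bind, or check consistency
            if subst.get(a, f) != f:
                return None
            subst[a] = f
        elif a != f:                     # constant mismatch
            return None
    return subst

def fire(plan, beliefs):
    """Generalised modus ponens: for every grounding substitution delta, yield
    (delta(Q), [delta(P1), ..., delta(Pn)]) -- the derived ground literal
    together with the premises that justify it."""
    for facts in product(beliefs, repeat=len(plan.body)):
        subst = {}
        for pat, fact in zip(plan.body, facts):
            subst = match(pat, fact, subst)
            if subst is None:
                break
        if subst is not None:
            pred, args = plan.head
            yield (pred, tuple(subst.get(a, a) for a in args)), list(facts)

# Firing P(?x) -> Q(?x) on {P(a), P(b)} derives Q(a) and Q(b),
# each justified by its matching premise.
plan = Plan([("P", ("?x",))], ("Q", ("?x",)))
for derived, premises in fire(plan, [("P", ("a",)), ("P", ("b",))]):
    print(derived, "justified by", premises)
```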
4.1 Justifying Plans and Beliefs
In most previous work, a derived belief has been justified only by the beliefs used to derive it, but not by the plan that was fired. A reason for this is that the plans of an agent are usually part of a plan library that is assumed to be correct and unlikely to be modified. However, as described in (Nguyen, 2009), this is not always the case. In fact, even if a derived belief ($Q$) turns out to be wrong, this should not necessarily mean that any of the supporting beliefs ($P_1, \ldots, P_n$) are wrong; it may just be wrong to conclude $Q$ from them (i.e. the plan is incorrect). In this case it might be better to remove the plan rather than removing some $P_i$ to make $Q$ underivable. Otherwise, if the agent later realizes that $P_i$ is actually true and the plan was not removed, $Q$ can be derived again.
Giving up the assumption that plans are part of a static library and will not be changed means that plans are now also beliefs. The belief base of an agent will therefore consist of both literals and plans.
We can now define justifications. All formulas are associated with one or more justifications. A justification consists of a supported formula and a support list. The formula is supported by the formulas in the support list. We write (A, [B C]) when A is supported by B and C. A formula is independent if it has a non-inferential justification (i.e. it is a percept, mental note or communicated belief). In that case, the justification has an empty support list.
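A minimal sketch of this structure in Python (the class and method names are hypothetical; the paper does not prescribe an implementation): a justification pairs a supported formula with a support list, and an empty support list marks it as non-inferential.

```python
from dataclasses import dataclass, field

@dataclass
class Justification:
    supported: str                               # the formula being supported
    support: list = field(default_factory=list)  # the support list

    def is_non_inferential(self) -> bool:
        # Percepts, mental notes and communicated beliefs
        # carry an empty support list.
        return not self.support

j1 = Justification("A", ["B", "C"])   # (A, [B C]): A is supported by B and C
j2 = Justification("P(a)")            # independent: non-inferential justification
```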
Each formula P is associated with a dependency list and a justification list. The dependency list for P contains all justifications for P, i.e. all justifications where P is the supported formula. The justification list for P contains all justifications where P is a member of the support list. If we consider the belief base to be a graph, each justification has exactly one outgoing edge (to the formula it is supporting) and zero or more incoming edges (from the formulas in the support list). Non-inferential justifications have zero incoming edges.
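Continuing the sketch, the two lists can be kept as maps from formulas to justifications; each justification then contributes one outgoing edge (to its supported formula) and one incoming edge per member of its support list. Again, the names and the representation of a justification as a `(supported, support)` pair are our own assumptions:

```python
from collections import defaultdict

dependency_list = defaultdict(list)     # P -> justifications supporting P
justification_list = defaultdict(list)  # P -> justifications P helps support

def add_justification(supported, support):
    """Record the justification (supported, support) and index it in both lists."""
    j = (supported, list(support))
    dependency_list[supported].append(j)   # the one outgoing edge of j
    for f in support:                      # one incoming edge per supporter
        justification_list[f].append(j)
    return j

add_justification("Q(a)", ["P(a)", "P(x) -> Q(x)"])
add_justification("P(a)", [])              # non-inferential: no incoming edges
```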
Example 1. Consider an agent with a plan, P(x) →
Q(x), and the beliefs P(a) and P(b). The dependency
graph in figure 1(a) shows that all formulas in the be-
lief base are non-inferential.
Running the plans of the agent to quiescence (i.e.
applying the only plan to both P(a) and P(b)) yields
two new beliefs, Q(a) and Q(b). This is shown in
figure 1(b). If for instance ¬Q(b) is introduced and
we choose to contract by Q(b), we can use the rela-
tion between formulas to backtrack from Q(b) to the
formulas supporting it.
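Under the same assumed representation, Example 1 can be traced in code: run the single plan to quiescence, record a justification for each derived literal (supported by both the ground premise and the plan, as argued above), and backtrack from Q(b) to its supporters when contracting. This is an illustrative sketch, not the paper's algorithm:

```python
# Belief base of Example 1: two literals and one plan, all non-inferential.
beliefs = {"P(a)", "P(b)", "P(x) -> Q(x)"}
justs = [("P(a)", []), ("P(b)", []), ("P(x) -> Q(x)", [])]

# Run the only plan to quiescence; each derived literal is justified by its
# ground premise and by the plan itself (plans are beliefs too, see above).
for c in ("a", "b"):
    beliefs.add(f"Q({c})")
    justs.append((f"Q({c})", [f"P({c})", "P(x) -> Q(x)"]))

def supporters(formula):
    """Backtrack one step: every formula in some support list for `formula`."""
    return {f for supported, support in justs
              if supported == formula
              for f in support}

# Contracting by Q(b): backtrack from Q(b) to the formulas supporting it.
# Removing either P(b) or the plan makes Q(b) underivable.
print(supporters("Q(b)"))   # {'P(b)', 'P(x) -> Q(x)'}
```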