can be deferred until a decision needs to be made,
typically before a particular action is taken. Poorly
chosen substitutions during the practical reasoning
phase may indicate the choice of the wrong resource,
which may become apparent later during the
execution of the plan thus instantiated. Several ways of handling substitutions during agent execution have been proposed. The 2APL system (Dastani, 2008) offers the possibility of creating rules that specify decision processes, including the handling of possible substitutions. The CAN system (Sardina & Padgham, 2011) records which substitutions have been used, so that if a previous attempt to achieve an intention failed under some substitution and a new plan must be sought, other possible substitutions can be chosen for the same plan.
Our approach is based on trying to preserve all possible variable bindings and working with all of them during the execution of a plan. When some step (act) of the plan is executed, the set of these options may shrink and only some of them survive. However, as long as at least one of the options persists, the plan can continue. We first introduced the principle that leads to late variable binding in (Zboril, Koci, Janousek, & Mazal, 2008) and later discussed it in (Zboril, Zboril, & Kral, Flexible Plan Handling using Extended Environment, 2013). In this paper, we give a more precise and modified form of the basic operations needed to create a late-binding AgentSpeak(L) interpreter and specify where each operation is used.
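The pruning principle described above can be sketched as follows. This is a minimal illustration only; the function names and the shape of execute_step are our own assumptions, not part of any existing interpreter.

```python
# Late-binding sketch: instead of committing to one substitution when a
# plan is adopted, keep the whole set of candidate bindings and prune it
# as each plan step is executed.

def run_plan_late_binding(plan_steps, candidate_bindings, execute_step):
    """Execute plan_steps while carrying all bindings that remain viable.

    execute_step(step, binding) returns True if the step succeeds under
    that binding.  The plan fails only when no binding survives.
    """
    bindings = list(candidate_bindings)
    for step in plan_steps:
        # Keep only those bindings under which this step succeeds.
        bindings = [b for b in bindings if execute_step(step, b)]
        if not bindings:
            return None  # every option was pruned -> the plan fails
    return bindings  # at least one binding survived the whole plan
```

The point of the sketch is that failure is deferred: a single unfortunate binding no longer dooms the plan, because the plan only fails once the entire set of options has been exhausted.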
The paper is organized as follows. In Section 2, we briefly introduce the AgentSpeak(L) language and give the motivation for introducing late binding into its interpreter. Section 3 defines the operations that are necessary for such an interpretation, and in Section 4, we introduce the concept of weak plans and events, on which we build the execution of a single plan or a hierarchy of plans with the late-binding approach.
2 EXECUTION OF AgentSpeak(L)
An agent that is executed according to some program written in AgentSpeak(L) is a tuple ⟨PB, EQ, BB, AS, IS, S_E, S_O, S_I⟩, where PB is a plan base, EQ is an event queue, BB is a belief base, AS is a set of actions that are executable by the agent, IS is a set of intention structures, and S_E, S_O, S_I are functions for the selection of events, options, and intentions, respectively (Rao, AgentSpeak(L): BDI agents speak out in a logical computable language, 1996). The program itself
written in this language defines knowledge as formulas of first-order predicate logic; for simplicity, we will consider beliefs in the form of atomic formulas of this logic, just as facts in PROLOG are defined as predicates. Goals can be declared as achievement or test goals. In the former case, the predicate is prefixed with ! and in the latter with ?. The core part of an AgentSpeak(L) program is the set of plans describing how some goal is to be achieved. A plan has the form t_e : Ψ ⟵ plan and consists of a triggering event t_e, a context condition Ψ, and the plan's body. Triggering events are written in the form +event or -event, where an event can be a test goal, an achievement goal, or just a predicate. A plan is then relevant to some goal in the form of an event when its triggering event is unifiable with that event, and it is further applicable if the context condition is valid in the current state of the agent's BB. If there is a substitution for which the plan is relevant and applicable, then the plan is a possible means of achieving the goal.
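The relevance check can be illustrated with a toy unifier. We assume here, purely for illustration, that terms are nested tuples ('pred', arg1, …) and that variables are strings starting with an uppercase letter (the PROLOG convention); this is not the unifier of any particular AgentSpeak(L) implementation.

```python
# Toy most-general-unifier (mgu) over tuple-encoded terms.
# Note: the occurs check is omitted for brevity.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # Follow variable bindings already recorded in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return an mgu extending s, or None if a and b do not unify."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        s[a] = b
        return s
    if is_var(b):
        s[b] = a
        return s
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def relevant(trigger, event):
    # A plan is relevant to an event iff its triggering event unifies with it.
    return unify(trigger, event)
```

For example, relevant(('achieve', ('clean', 'Room')), ('achieve', ('clean', 'kitchen'))) yields the mgu binding Room to kitchen, so the plan is relevant to that event.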
The interpretation of an agent programmed in
AgentSpeak(L) is presented in the original paper
(Rao, AgentSpeak(L): BDI agents speak out in a
logical computable language, 1996), and the
operational semantics is presented more formally in
(Winikoff, 2005). The problem we address, and offer a solution to, is that when a plan is chosen, a substitution is also chosen, and it may not be appropriate for the agent's subsequent acts. As difficult as it is to predict which of the possible behaviors will lead to success in a dynamic environment, it is possible either to try to estimate these substitutions based on previous experience, or to defer the decision about substitutions until it becomes necessary. We use the latter approach and now wish to demonstrate its potential appropriateness and usefulness.
Let us return to the aforementioned interpretation of plan selection for some goal in the form of an event e. Then there may exist one or more plans from the agent's plan base such that their triggering events are unifiable with e for some most general unifier (mgu) σ. Thus, let us have such plans te_1 : Ψ_1 ⟵ plan_1, te_2 : Ψ_2 ⟵ plan_2, …, te_n : Ψ_n ⟵ plan_n, and let σ_1 = mgu(e, te_1), σ_2 = mgu(e, te_2), …, σ_n = mgu(e, te_n) be the mgus for the event and the individual triggering events. These plans are relevant, but only applicable if BB ⊨ Ψ_1σ_1 for the first plan, etc. Thus, again, we look for substitutions that make the individual context conditions hold in the agent's BB; let us denote them ρ_1, ρ_2, …, ρ_n. Then the agent selects one of these plans to achieve the goal represented by event e, say plan j, and the agent is going to execute the body of this plan.
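The applicability step can be sketched in the same toy term representation used above: beliefs and the context condition are flat atoms ('pred', arg, …) and variables are uppercase strings. Rather than committing to one ρ, we collect every extension of the relevance mgu σ under which the context holds; all names here are illustrative assumptions, not part of any existing interpreter.

```python
# Collect *all* applicable substitutions for a relevant plan, instead of
# committing to a single one, as a late-binding interpreter would.

def match(pattern, fact, subst):
    """Try to extend subst so that the pattern atom matches the ground fact."""
    s = dict(subst)
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    for p, f in zip(pattern[1:], fact[1:]):
        if isinstance(p, str) and p[:1].isupper():  # variable
            if p in s and s[p] != f:
                return None
            s[p] = f
        elif p != f:
            return None
    return s

def applicable_substitutions(context, belief_base, sigma):
    """All extensions rho of the relevance mgu sigma under which the
    context condition holds in the belief base."""
    return [rho for fact in belief_base
            if (rho := match(context, fact, sigma)) is not None]
```

For instance, with beliefs [('at', 'home'), ('at', 'office')] and context ('at', 'X'), two substitutions survive; a classical interpreter would pick one of them, while a late-binding interpreter keeps both until execution forces a choice.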