Another problem with HMMs is that they are propositional, which means they handle only sequences of unstructured symbols. Therefore, Kersting et al. combined HMMs with first-order logic and proposed logical hidden Markov models (LHMMs), which belong to the family of statistical relational learning methods (Kersting et al., 2006). Compared with HMMs, LHMMs can infer complex relations and have fewer parameters, thanks to an added instantiation process. However, LHMMs do not relax the Markov assumption, so they suffer the same performance decline as HMMs when long-term dependencies exist between hidden states. Thus, we presented logical hidden
semi-Markov models (LHSMMs) by using the idea
of HSMMs (Zha et al., 2013). Although our previous work demonstrated the effectiveness of applying LHSMMs to intention recognition (IR), we only considered the intentions of a single agent. However, most complex tasks must be carried out by one or more teams, in which agents play different roles and cooperate to achieve common goals. In this case, the multi-agent intention recognition (MAIR) problem has to be solved, which means that we need to recognize not only the intentions of every agent, but also the composition and cooperation mode of the teams (Pfeffer et al., 2009).
Since LHSMMs inherit the advantages of LHMMs and relax the Markov assumption, we use LHSMMs to solve the MAIR problem, as an extension of our previous work. Besides considering the intentions of more than one agent, we further refine the previous models in three respects. First, logical predicates and connectives are used to represent the working modes of the team. Second, conditional transition probabilities are applied, which make the transition probabilities depend on previous observations. Third, the alphabet of instances is allowed to change during inference, because the number of simulation entities may change through dying, escaping and reinforcement. The former forward algorithm with a duration variable (LFAD), which is the core of the inference, is also adjusted according to these modifications of the models. A simple virtual game, "Killing monsters", is designed to evaluate the performance of LHSMMs in MAIR. In this game, two warriors move around and kill monsters on a grid map; they can act both individually and cooperatively. In the simulation, we use a lognormal distribution to model the duration of the working modes (abstract hidden states), and compute the probabilities of the working modes and of the monsters being chosen at each time step. We will show that LHSMMs can correctly recognize the working modes and intentions of the warriors in the game. Additionally, LHSMMs can even recognize the destinations of an agent in advance by making use of the cooperation information.
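As an illustrative sketch only (not the paper's actual implementation), a lognormal duration model for a working mode can be discretised over time steps as follows; the parameters `mu`, `sigma` and `max_d` are hypothetical choices of ours:

```python
import math

def lognormal_pdf(d, mu, sigma):
    """Probability density of a lognormal duration d (defined for d > 0)."""
    if d <= 0:
        return 0.0
    return (1.0 / (d * sigma * math.sqrt(2.0 * math.pi))
            * math.exp(-((math.log(d) - mu) ** 2) / (2.0 * sigma ** 2)))

def duration_distribution(mu, sigma, max_d):
    """Discretised, normalised duration distribution over 1..max_d time steps."""
    raw = [lognormal_pdf(d, mu, sigma) for d in range(1, max_d + 1)]
    total = sum(raw)
    return [p / total for p in raw]

# Hypothetical duration model for one working mode (abstract hidden state).
probs = duration_distribution(mu=1.5, sigma=0.5, max_d=20)
most_likely = max(range(len(probs)), key=probs.__getitem__) + 1
```

Discretising and renormalising over a finite horizon keeps the duration term usable inside a forward recursion, at the cost of truncating the lognormal's tail.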
In this paper, we apply LHSMMs to recognize the intentions of two agents and their working modes in a simple virtual game. The rest of the paper is organized as follows: the next section gives the formal definition of LHSMMs, the inference algorithm, and a directed graphical representation of the game. Section 3 presents the simulation and results. Finally, we conclude and discuss future work in Section 4.
2 LOGICAL HIDDEN SEMI-MARKOV MODELS
This section introduces LHSMMs, which will be used to recognize the intentions of the agents and their working modes. We give a formal description of the models and of the inference process in Sections 2.1 and 2.2 respectively. A multi-agent game is also designed to evaluate the models in Section 2.3.
2.1 Model Definition
LHSMMs extend LHMMs by modelling the
duration of the hidden abstract states just as HSMMs
extend HMMs. In this paper, we further refine our
former models by redefining the logical alphabet,
the selection probability and the transition matrix.
An LHSMM is a five-tuple $M = \langle s, \{\Sigma_t\}, \{\mu_t\}, \Delta, D \rangle$. The set $\{\Sigma_t\}$ records the possible instances for the variables in every abstract state at every time step. Since the number of simulation entities may change because of dying, escaping or reinforcement, $\Sigma_t$ depends on the observations available up to time $t$ ($\Sigma_t$ is the logical alphabet at time $t$ given $O_{1:t} = O_1, O_2, \ldots, O_t$). $\{\mu_t\}$ is a selection probability set over $\Sigma_t$; thus it is also a function of $O_{1:t}$. $\Delta$ is the transition matrix defining the transition probabilities between abstract states, and $D$ models the durations of the abstract hidden states. Abstract transitions are expressions of the form $p : H \xleftarrow{O} B$, where $p \in [0, 1]$ and $H$, $B$ and $O$ are logic sentences representing the hidden states and the observation. $\sigma$ is a substitution, and $B\sigma$ is one state of $G_{\Sigma}(B)$, where $G_{\Sigma}(B)$ represents the set of all ground (variable-free) atoms of $B$ over the alphabet $\Sigma$; the same holds for $H\sigma$ and $O\sigma$.
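As a minimal sketch of these definitions (all predicate and instance names below are hypothetical choices of ours, not from the paper), an abstract transition with a single logical variable can be grounded over the current alphabet $\Sigma_t$ like this:

```python
# Hypothetical abstract transition p : H <-O- B, where the logical
# variable X ranges over the instances in the current alphabet Sigma_t
# (here, the monsters still alive on the map).
transition = {
    "p": 0.6,
    "head": "attack(X)",     # next abstract hidden state H
    "body": "approach(X)",   # current abstract hidden state B
    "obs": "near(X)",        # observation O
}

def ground(transition, sigma_t):
    """Enumerate ground instances of an abstract transition over Sigma_t.

    Each substitution sigma binds X to one instance in the current
    alphabet, yielding the variable-free atoms H*sigma, B*sigma, O*sigma.
    """
    grounded = []
    for inst in sigma_t:
        sub = lambda atom: atom.replace("X", inst)
        grounded.append((transition["p"],
                         sub(transition["head"]),
                         sub(transition["body"]),
                         sub(transition["obs"])))
    return grounded

sigma_t = ["m1", "m2", "m3"]      # three monsters alive at time t
groundings = ground(transition, sigma_t)

sigma_t_next = ["m1", "m3"]       # m2 was killed: Sigma_{t+1} shrinks
fewer = ground(transition, sigma_t_next)
```

Because the alphabet is indexed by time, the set of groundings (and hence the effective state space) grows or shrinks as entities appear or die, which is exactly why $\Sigma_t$ and $\mu_t$ are functions of $O_{1:t}$.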
We also use the idea of logical transitions in Natarajan et al.'s LHHMMs (2008) and let the value
SIMULTECH 2014 - 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications