POSSIBILISTIC ACTIVITY RECOGNITION

Patrice C. Roy¹, Bruno Bouchard², Abdenour Bouzouane² and Sylvain Giroux¹
¹Domus laboratory, Université de Sherbrooke, Sherbrooke, Canada
²LIAPA laboratory, Université du Québec à Chicoutimi, Chicoutimi, Canada
Keywords:
Ambient intelligence, Activity recognition, Possibilistic description logic, Smart homes, Cognitive assistance.
Abstract:
The development towards ambient computing will stimulate research in many fields of artificial intelligence, such as activity recognition. To address this challenging issue, we present a formal activity recognition framework based on possibility theory, which differs from the majority of existing recognition approaches, usually based on probability theory. To validate this novel alternative, we are developing an ambient agent for the cognitive assistance of an Alzheimer's patient within a smart home, in order to identify the various ways of supporting him in carrying out his activities of daily living.
1 INTRODUCTION
Combining ambient assisted living with activity recognition techniques greatly increases its acceptance and makes it more capable of providing a better quality of life in a non-intrusive way. Elderly people, with or without disabilities, could clearly benefit from this new technology (Casas et al., 2008). Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations of the environmental conditions. Owing to its many-faceted nature, research addressing the recognition problem in smart environments refers to activity recognition as plan recognition, which relates behaviours to the performer's goals. The plan recognition problem has been an active research topic (Augusto and Nugent, 2006) for a long time and still remains very challenging. The keyhole, adversarial or intended plan recognition problem (Geib, 2007) is usually based on probabilistic-logical inference for the construction of hypotheses about the possible plans, and on a matching process linking the observations with activity models (plans) related to the application domain.
Prior work has used sensors, such as radio frequency identification (RFID) tags attached to household objects (Philipose et al., 2004), to recognize the execution status of particular types of activities, such as hand washing (Mihailidis et al., 2007), in order to provide assistive tasks such as reminders about the activities of daily living (ADL) (Pollack, 2005). However, most of this research has focused on probabilistic models. One limitation of probability theory is that it is insufficient for handling imperfect information, which is tainted by uncertainty and imprecision. In the context of cognitive assistance, where the human agent is characterized by erratic behaviours, complete ignorance about the specific dependence between two actions cannot be represented with classical probability theory. Possibility theory (Dubois and Prade, 1988), an alternative to probability theory, is an uncertainty theory devoted to the handling of incomplete information. By using a pair of dual set functions (possibility and necessity measures) instead of one, this theory allows us to capture partial ignorance, so that it is possible to represent partial belief about events. It is also easier to capture partial belief concerning the realization of activities from human experts, since this theory was initially meant to provide a graded semantics to natural language statements (Zadeh, 1978).
At the Domus and LIAPA labs, we investigate possibility theory to address this issue of recognizing behaviours classified according to cognitive errors. These recognition results are used to identify the various ways a smart home may help an Alzheimer's occupant at the early-intermediate stages to carry out his ADLs. This context increases the recognition complexity in such a way that the presumption of the observed agent's coherency, usually made in the literature, cannot be reasonably maintained. We
propose a formal framework for activity recognition based on description logic and possibility theory, which transforms the recognition problem into a possibilistic classification of activities. The possibility and necessity measures on behaviour hypotheses allow us to capture the fact that, in some cases, erroneous behaviours concerning the realization of activities can be as possible as normal behaviours. Hence, in a complete ignorance setting, both behaviour types are fully possible, while neither is necessarily the one being carried out. Thus, unlike probability theory, possibility theory is not additive.
The paper is organized as follows. Section 2
presents our new possibilistic recognition model.
Section 3 presents an overview of related work. Finally, we conclude the paper by outlining future plans for this work.
2 POSSIBILISTIC ACTIVITY RECOGNITION MODEL
In our model, the observer agent has knowledge concerning the resident's environment, which is represented by using a description logic (DL) formalism (Baader et al., 2007). DL is a family of knowledge representation formalisms that may be viewed as a subset of first-order logic; its expressive power goes beyond propositional logic, although reasoning is still decidable. By using the open world assumption, it allows us to represent the fact that the environment is partially observable. The observation of the environment's state with sensors allows us to obtain the low-level context C of the environment. Since the observation can be partial, this context can represent a subset of the environment's state space S (C ⊆ S), where the states of this subset share some common environmental properties. For instance, the context where the patient is in the kitchen, the pantry door is open, and the pasta box is in the pantry includes several possible states. Also, a set of contexts can be a partition of the environment's state space.
In order to infer behavioural hypotheses about the realization of activities by an observed patient, the notion of a possibilistic action must be formalized, since activities are carried out by performing a sequence of actions that affect the environment's state. A possibilistic action on the set of environment states S is a nondeterministic action where the transitions between states are quantified with a possibility distribution.
Definition 2.1 (Possibilistic Action). A possibilistic action a is a tuple (C_a^pre, C_a^pos, π_a^init, π_a^trans), where C_a^pre and C_a^pos are context sets and π_a^init and π_a^trans are possibility distributions.
C_a^pre is the set of possible contexts before the action occurs (pre-action contexts), C_a^pos is the set of possible contexts after the action occurs (post-action contexts), π_a^init is the possibility distribution on C_a^pre that an environment state in a particular context allows the action to occur, and π_a^trans is the transition possibility distribution between contexts in C_a^pre and C_a^pos if the action does occur.
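As a concrete illustration, Definition 2.1 can be encoded as a small data structure. The following Python sketch is not part of the original framework: contexts are simplified to sets of assertion strings, and all names (PossibilisticAction, pi_init, pi_trans) are ours.

from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

Context = FrozenSet[str]  # simplification: a context as a set of DL-like assertions

@dataclass
class PossibilisticAction:
    name: str
    pre_contexts: FrozenSet[Context]                 # C_a^pre
    post_contexts: FrozenSet[Context]                # C_a^pos
    pi_init: Dict[Context, float]                    # pi_a^init(c) for c in C_a^pre
    pi_trans: Dict[Tuple[Context, Context], float]   # pi_a^trans(e|c)

# Toy OpenPantryDoor action: fully possible to initiate when the patient is
# in the kitchen with the pantry door closed.
c_pre = frozenset({"In(patient, kitchen)", "Closed(pantryDoor)"})
c_pos = frozenset({"In(patient, kitchen)", "Open(pantryDoor)"})
open_pantry = PossibilisticAction(
    "OpenPantryDoor",
    frozenset({c_pre}), frozenset({c_pos}),
    {c_pre: 1.0}, {(c_pre, c_pos): 1.0},
)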
The action library is represented with an ontology, where the set of possible actions A is partially ordered by the action subsumption relation ⪯_A, which can be seen as an extension of the concept subsumption relation of DL (Baader et al., 2007).
Proposition 2.2 (Action subsumption). Let a, b ∈ A be two action tuples (C_a^pre, C_a^pos, π_a^init, π_a^trans) and (C_b^pre, C_b^pos, π_b^init, π_b^trans). If an action b is subsumed by an action a, denoted by b ⪯_A a, then for every context d in C_b^pre there exists a context c in C_a^pre where d ⊆ c and π_b^init(d) ≤ π_a^init(c), and for every context e in C_b^pos there exists a context f in C_a^pos where e ⊆ f and π_b^trans(e|d) ≤ π_a^trans(f|c).
For instance, the OpenDoor action subsumes the OpenPantryDoor action, where OpenDoor is at least as possible as OpenPantryDoor in contexts where OpenPantryDoor can be carried out or observed.
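Under the same simplified encoding, Proposition 2.2 becomes a mechanical check. In this sketch, context_subsumes stands in for the DL entailment test between contexts and is an assumption of ours.

from typing import Callable

def action_subsumes(a: PossibilisticAction, b: PossibilisticAction,
                    context_subsumes: Callable[[Context, Context], bool]) -> bool:
    # Every pre-context d of b needs a covering pre-context c of a with
    # pi_b^init(d) <= pi_a^init(c).
    for d in b.pre_contexts:
        if not any(context_subsumes(c, d) and b.pi_init[d] <= a.pi_init[c]
                   for c in a.pre_contexts):
            return False
    # Every transition (d, e) of b needs a covering transition (c, f) of a
    # with pi_b^trans(e|d) <= pi_a^trans(f|c).
    for (d, e), p_b in b.pi_trans.items():
        if not any(context_subsumes(c, d) and context_subsumes(f, e)
                   and p_b <= a.pi_trans[(c, f)]
                   for (c, f) in a.pi_trans):
            return False
    return True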
With this action library, the recognition agent evaluates the most possible action that can explain the changes observed in the environment. An observation at time t, denoted by obs_t, consists of a set of DL assertions describing, according to the sensors, the environment's state resulting from an action realization. Since the observation obs_t can be partial, multiple contexts c_i can be entailed by this observation (obs_t ⊨ c_i), which influences the possibility and necessity measures of observation for each action.
To determine such possibility and necessity measures of action observation, we must evaluate a possibility distribution on the action library concerning the possibility that a particular action was observed, according to the previous action prediction possibilities (possibility that an action will be the next one carried out) and the current action recognition possibilities (possibility that an action is the one that was carried out). The action prediction possibility distribution at time t, π_t^pre, is obtained by selecting, for each action a ∈ A, the maximum possibility value among the action initiation possibilities π_a^init(c_i) for the pre-action contexts c_i ∈ C_a^pre entailed by the observation obs_t. The action recognition possibility distribution at time t, π_t^rec, is obtained by selecting, for each action a ∈ A, the maximum possibility value among the action transition possibilities π_a^trans(c_j|c_i) for the pre-action contexts c_i ∈ C_a^pre entailed by the previous observation obs_{t-1} and the post-action contexts c_j ∈ C_a^pos entailed by the current observation obs_t. Since the prediction possibilities must be taken into account when evaluating the action observation possibilities, the observation addition operator ⊕_obs is used on the previous prediction possibility distribution π_{t-1}^pre and the current recognition possibility distribution π_t^rec to compute the current action observation possibility distribution π_t^obs. The ⊕_obs operator selects, for each action a ∈ A, the maximum possibility value between the prediction possibility π_{t-1}^pre(a) and the recognition possibility π_t^rec(a), in order to obtain the observation possibility π_t^obs(a).
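These three distributions can be sketched as dictionary computations over the PossibilisticAction encoding above; reducing the entailment of contexts by an observation to set membership is an assumption of the sketch.

from typing import Dict, Iterable, Set

def predict(actions: Iterable[PossibilisticAction],
            entailed: Set[Context]) -> Dict[str, float]:
    # pi_t^pre: per action, the max initiation possibility over entailed pre-contexts.
    return {a.name: max((a.pi_init[c] for c in a.pre_contexts if c in entailed),
                        default=0.0) for a in actions}

def recognize(actions: Iterable[PossibilisticAction],
              prev_entailed: Set[Context],
              curr_entailed: Set[Context]) -> Dict[str, float]:
    # pi_t^rec: per action, the max transition possibility over entailed
    # (pre, post) context pairs.
    return {a.name: max((p for (c, f), p in a.pi_trans.items()
                         if c in prev_entailed and f in curr_entailed),
                        default=0.0) for a in actions}

def obs_add(pi_pre_prev: Dict[str, float],
            pi_rec: Dict[str, float]) -> Dict[str, float]:
    # The observation addition operator: a pointwise maximum, yielding pi_t^obs.
    return {n: max(pi_pre_prev.get(n, 0.0), pi_rec.get(n, 0.0))
            for n in set(pi_pre_prev) | set(pi_rec)}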
So, for each observation obs_t, we evaluate the action observation possibility distribution π_t^obs, which allows us to select the most possible observed action at time t, according to the possibility and necessity measures of action observation, Π_t^obs and N_t^obs. Those measures, which indicate the possibility Π_t^obs(Act) and necessity N_t^obs(Act) that an action a in a subset Act ⊆ A ({a} is also a subset) was observed by the observer agent, according to the environment's state described by obs_t, are given by:
Π_t^obs(Act) = max_{a∈Act} π_t^obs(a),   (1)

N_t^obs(Act) = max_{b∈A} π_t^obs(b) − Π_t^obs(A \ Act),   (2)

            = min_{a∉Act} ( max_{b∈A} π_t^obs(b) − π_t^obs(a) ).   (3)
Π_t^obs(Act) is obtained by taking the maximum value among the observation possibilities π_t^obs(a) of the actions a in Act. N_t^obs(Act) is obtained by taking the minimum value among the values resulting from subtracting the observation possibilities π_t^obs(a) of the actions a not in Act (a ∉ Act) from the maximum value in the distribution (used instead of 1, since the distribution may not be normalized, i.e. may not have at least one value equal to 1).
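Equations (1)-(3) reduce to a few lines once the distribution is a dictionary; the same helper applies unchanged to the behaviour measures of Equations (4)-(6) below. A minimal sketch:

from typing import Dict, Set, Tuple

def poss_and_nec(pi: Dict[str, float], subset: Set[str]) -> Tuple[float, float]:
    height = max(pi.values(), default=0.0)   # max_b pi(b); equals 1.0 iff normalized
    possibility = max((pi[a] for a in subset), default=0.0)            # Eq. (1)
    necessity = min((height - pi[a] for a in pi if a not in subset),
                    default=height)                                     # Eq. (3)
    return possibility, necessity

# e.g. poss_and_nec({"OpenTap": 1.0, "OpenDoor": 0.4}, {"OpenTap"}) == (1.0, 0.6)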
By obtaining the possibility and necessity measures for each action, we can then select the most possible observed action a_t that can explain the changes in the environment's state, described by the observation obs_t, resulting from the realization of an action at time t. An observed action at time t, denoted by a_t, is obtained by selecting the most possible and necessary action a ∈ A according to the Π_t^obs(a) and N_t^obs(a) values. If there is more than one most possible action, the least common subsumer action, according to the action subsumption relation, of this action subset is selected as the observed action a_t. For instance, if the most possible actions are OpenTap, OpenColdTap and OpenHotTap, then the OpenTap action is selected since it subsumes both OpenColdTap and OpenHotTap. The new observed action a_t is sent to the behaviour recognition agent, which uses the sequence of observed actions to infer behaviour hypotheses concerning the realization of the patient's activities.
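The selection step can then be sketched as follows; lcs is a hypothetical helper that walks the action ontology up to the least common subsumer (e.g. from OpenColdTap and OpenHotTap to OpenTap).

from typing import Callable, Dict, List

def select_observed_action(pi_obs: Dict[str, float],
                           lcs: Callable[[List[str]], str]) -> str:
    # Rank single actions by their (possibility, necessity) pair, then resolve
    # ties through the least common subsumer in the action ontology.
    score = lambda a: poss_and_nec(pi_obs, {a})
    best = max(score(a) for a in pi_obs)
    candidates = [a for a in pi_obs if score(a) == best]
    return candidates[0] if len(candidates) == 1 else lcs(candidates)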
Such activities are defined as plan structures, which consist of a planned sequence of actions that allows the activity's goals to be accomplished.
Definition 2.3 (Activity). An activity α is a tuple (Act_α, ⪯_α, C_α^rel, π_α^rel), where Act_α ⊆ A is the activity's set of actions, which is partially ordered by a sequence relation ⪯_α ⊆ Act_α × Act_α × T × T, where T represents a set of time values, C_α^rel is the set of possible contexts related to the activity realization, and π_α^rel is the possibility distribution that a context is related to the execution of the activity.
The use of time values allows us to describe the minimum and maximum delays between the realization of two actions. Thus the ⪯_α relation, which is transitive, can be seen as an ordering relationship with temporal constraints between two actions in the activity plan. For instance, the WatchTv activity can have an activity plan composed of the actions SitOnCouch, OpenTv and CloseTv and the sequence relations (SitOnCouch, OpenTv, 0, 5) and (OpenTv, CloseTv, 5, 480) (do not watch TV for more than 8 hours), where the time values are in minutes.
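A possible encoding of Definition 2.3, reusing the WatchTv example and the Context alias from the earlier sketch; the field names are ours.

from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List, Tuple

@dataclass
class Activity:
    name: str
    actions: FrozenSet[str]                           # Act_alpha
    sequence: List[Tuple[str, str, int, int]]         # (a, b, min_delay, max_delay)
    pi_rel: Dict[Context, float] = field(default_factory=dict)  # pi_alpha^rel

watch_tv = Activity(
    "WatchTv",
    frozenset({"SitOnCouch", "OpenTv", "CloseTv"}),
    [("SitOnCouch", "OpenTv", 0, 5), ("OpenTv", "CloseTv", 5, 480)],
)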
By using the observation obs_t, we evaluate, for each activity plan α in the plan library P, the possibility value that the currently observed environment state is related to the realization of activity α. The activity realization possibility distribution is obtained by taking, for each activity plan α ∈ P, the maximum possibility value among the context possibilities π_α^rel(c_i) for the contexts c_i ∈ C_α^rel entailed by the observation obs_t.
As previously mentioned, the most possible action a_t that could explain the changes in the environment's state according to the observation obs_t resulting from an action realization is sent to the behaviour recognition agent, which uses the sequence of observed actions to generate hypotheses concerning the behaviour of the patient as he performs some activities. This sequence of observed actions forms an observed plan P_t^obs, which consists of a totally ordered set (a_1, ..., a_i, ..., a_t), where each a_i is the most possible and necessary observed action for the observation obs_i. For instance, the observed plan ((OpenDoor, t = 0, 3), (EnterKitchen, t = 1, 4)) indicates that for obs_0, the OpenDoor action was observed at a timestamp of 3 minutes after the start of the recognition process, and for obs_1, the EnterKitchen action was observed one minute later (timestamp of 4 minutes).
Since the current observed behaviour can contain partial or complete coherent realizations of some activity plans, we must define the notion of a partial execution path. A partial execution path Path_j^Exe for an activity plan α is a subset of the observed plan P_t^obs, where each observed action in the partial path is associated with an action in the activity plan α. Also, the observed actions in the partial path Path_j^Exe must represent a coherent realization of a part of the activity plan, where the sequence and temporal constraints defined in the activity plan must be respected by the observed actions in the partial path. For instance, for the observed plan ((SitOnCouch, t = 0, 4), (OpenElectricalAppliance, t = 1, 5)), possible partial paths for the WatchTv activity plan could be the SitOnCouch action only, or the SitOnCouch action followed by the OpenElectricalAppliance action (since OpenElectricalAppliance subsumes OpenTv).
Each time a new observed action a_t is added to the observed plan P_t^obs, the set of partial execution paths Path^Exe is updated by extending, removing, or adding partial paths. A partial path can be extended if the new observed action a_t subsumes one of the next possible actions in the activity plan and if the extended partial path respects the constraints in the activity plan. If we can extend a partial path, we must keep a copy of the original partial path, since the new observed action might not be associated with the realization of the partial path's activity plan. A partial path is removed if the maximum delays for the next possible actions in the activity plan are exceeded. A partial path is added if the observed action a_t subsumes one of the first actions in the activity plan.
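The three update rules can be sketched as below; expected(activity, path, t) is an assumed helper yielding the next plan actions whose temporal windows are still open at time t (and nothing once every window has closed), starts(activity) yields the plan's possible first actions, and subsumes is the action subsumption test.

def update_paths(paths, activity, a_t, t, subsumes, expected, starts):
    updated = []
    for path in paths:
        nxt = list(expected(activity, path, t))
        if not nxt:
            continue                            # remove: all maximum delays exceeded
        updated.append(path)                    # keep a copy of the unextended path
        if any(subsumes(a_t, e) for e in nxt):
            updated.append(path + [(a_t, t)])   # extend with the new observed action
    if any(subsumes(a_t, s) for s in starts(activity)):
        updated.append([(a_t, t)])              # add: a_t can start the activity plan
    return updated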
The set of partial execution paths Path^Exe is then used to generate behavioural hypotheses B, according to the observed plan P_t^obs, concerning the observed behaviour of the patient as he realizes some activities. A behaviour hypothesis b ∈ B for an observed plan P_t^obs is a subset of the partial execution path set Path^Exe that respects the following conditions: (i) each partial path is associated with a different activity, (ii) some observed actions can be shared between partial paths, (iii) each partial path must have at least one action that is not shared. It should be noted that some observed actions in the observed plan may not belong to any partial path.
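Conditions (i)-(iii) translate directly into a check; representing paths as lists of (action, timestamp) pairs and mapping each path to its activity through activity_of are assumptions of this sketch.

from collections import Counter

def is_hypothesis(paths, activity_of) -> bool:
    activities = [activity_of(tuple(p)) for p in paths]
    if len(set(activities)) != len(activities):        # (i) one path per activity
        return False
    counts = Counter(obs for p in paths for obs in p)  # (ii) sharing is allowed
    return all(any(counts[obs] == 1 for obs in p)      # (iii) one unshared action each
               for p in paths)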
A behaviour hypothesis is normal, denoted by b_N, when each observed action in the observed plan is associated with at least one partial path. A normal behaviour represents a coherent realization, which can be partial or complete, of some activities by the patient. A behaviour hypothesis is erroneous, denoted by b_E, when some observed actions in the observed plan are not associated with a partial path. An erroneous behaviour represents an erroneous realization of some activities, while some other activities can still be carried out in a coherent way.
At this point, the behaviour recognition agent has determined the sets of plausible normal and erroneous hypotheses, B_N and B_E, concerning the behaviour of the observed patient. In order to circumscribe the behaviour hypothesis set before sending these hypotheses to an assistance agent, the possibility and necessity measures concerning the observation of each behaviour must be evaluated. Such measures are obtained from the behaviour possibility distribution, which in turn needs the partial execution path possibilities. The partial execution path possibility distribution at time t, π_t^Exe, is obtained by selecting, for each partial path p ∈ Path^Exe, the maximum value between the minimum action prediction possibility among the next possible actions and the minimum value among the action observation and activity possibilities for each observed action in the partial path. This partial path possibility distribution π_t^Exe is then used to evaluate the behaviour possibility distribution π_t^bev. The behaviour possibility distribution π_t^bev is obtained by selecting, for each behaviour hypothesis b ∈ B, the maximum possibility value between the minimum partial path possibility for the partial paths of the hypothesis, the minimum action observation possibility for the observed actions in the partial paths of the hypothesis, and the minimum action observation possibility for the observed actions not in the partial paths of the hypothesis.
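Our reading of this combination, as a sketch: pi_exe is keyed by path (as a tuple) and pi_obs by action name, and the empty-set defaults mirror the max/min conventions used in the measures above.

def behaviour_possibility(hypothesis, pi_exe, pi_obs, observed_plan):
    inside = {obs for path in hypothesis for obs in path}
    outside = [obs for obs in observed_plan if obs not in inside]
    return max(
        min((pi_exe[tuple(p)] for p in hypothesis), default=0.0),  # partial paths
        min((pi_obs[a] for a, _ in inside), default=1.0),          # actions in paths
        min((pi_obs[a] for a, _ in outside), default=1.0),         # actions left out
    )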
The behaviour possibility distribution π_t^bev allows us to evaluate the possibility and necessity measures of behaviour observation, Π_t^bev and N_t^bev. Those measures, which indicate the possibility Π_t^bev(Bev) and necessity N_t^bev(Bev) that a behaviour b in a subset Bev ⊆ B is the behaviour of the observed patient according to the observed plan P_t^obs, are given by:
Π_t^bev(Bev) = max_{b∈Bev} π_t^bev(b),   (4)

N_t^bev(Bev) = max_{c∈B} π_t^bev(c) − Π_t^bev(B \ Bev),   (5)

            = min_{b∉Bev} ( max_{c∈B} π_t^bev(c) − π_t^bev(b) ).   (6)
Π_t^bev(Bev) is obtained by selecting the maximum behaviour possibility among the behaviours b in the behaviour subset Bev ⊆ B. N_t^bev(Bev) is obtained by selecting the minimum value among the values resulting from subtracting the behaviour possibilities π_t^bev(b) of the behaviour hypotheses b not in Bev (b ∉ Bev) from the maximum possibility in the distribution. This allows us to represent an interval of confidence [N_t^bev(Bev), Π_t^bev(Bev)] concerning the possibility that a behaviour hypothesis b ∈ Bev is the observed behaviour of the patient according to the observed plan P_t^obs. So, after each observation obs_t, the behaviour recognition agent selects the most possible and necessary behaviour hypotheses and sends them to an assistance agent, which will use them to plan an assistive task if needed.
By using the formal tools previously presented, we can formulate Algorithms 1 and 2, which describe the principal steps of the recognition process.
Algorithm 1: Action recognition.
Input:
  obs_{t-1}, obs_t — previous and current observations
  π_{t-1}^pre — previous action prediction distribution
  C — context set
  C_{t-1} — previous entailed contexts
  A, P — action and plan libraries
Output:
  a_t — current recognized observed action
  π_t^pre, π_t^rec, π_t^obs, π_t^rel — current action prediction, action recognition, action observation, and activity possibility distributions
1: C_t ← evaluateEntailedContexts(C, obs_t)
2: π_t^pre ← evaluateActionPrediction(A, C_t)
3: π_t^rec ← evaluateActionRecognition(A, C_t, C_{t-1})
4: π_t^obs ← observationAddOperator(π_{t-1}^pre, π_t^rec)
5: a_t ← selectObservedAction(A, π_t^obs)
6: π_t^rel ← evaluateActivityRelated(P, C_entail)
To recognize the behaviour of the observed patient after the realization of an action at time t, the recognition agent uses the environmental observation obs_t to generate behavioural hypotheses that could explain the sequence of t observed actions. According to Algorithm 1, the contexts C_{t-1} and C_t that are entailed by the previous and current observations obs_{t-1} and obs_t are used to evaluate the action observation possibility distribution π_t^obs on the action library A by applying the observation addition operator ⊕_obs to the previous action prediction possibility distribution π_{t-1}^pre and the current action recognition possibility distribution π_t^rec. This action observation possibility distribution π_t^obs is then used to evaluate the action observation possibility and necessity measures Π_t^obs and N_t^obs, which are used, in conjunction with the action subsumption relation, to select the most possible and necessary observed action a_t. Also, the activity possibility distribution π_t^rel on the activity plan library P, which indicates the possibility that the observed environment state described in obs_t is related to a specific activity realization, is evaluated.
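Putting the sketches together, one step of Algorithm 1 might look as follows; entail stubs the DL reasoner that computes the entailed contexts, and the least-common-subsumer tie-break of line 5 is folded into select_observed_action from earlier. All glue names are ours.

def action_recognition_step(prev_entailed, pi_pre_prev,
                            actions, activities, obs_t, entail, lcs):
    entailed_t = entail(obs_t)                                 # line 1: C_t
    pi_pre_t = predict(actions, entailed_t)                    # line 2
    pi_rec_t = recognize(actions, prev_entailed, entailed_t)   # line 3
    pi_obs_t = obs_add(pi_pre_prev, pi_rec_t)                  # line 4
    a_t = select_observed_action(pi_obs_t, lcs)                # line 5
    pi_rel_t = {act.name: max((act.pi_rel[c] for c in act.pi_rel
                               if c in entailed_t), default=0.0)
                for act in activities}                         # line 6
    return a_t, entailed_t, pi_pre_t, pi_obs_t, pi_rel_t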
According to Algorithm 2, the observed plan P_t^obs, which includes the new observed action a_t, is used to generate a set of hypotheses B concerning the observed behaviour of the patient.
Algorithm 2: Behaviour recognition.
Input:
  a_t — current recognized observed action
  P_{t-1}^obs — previous observed plan
  A, P — action and plan libraries
  π^pre, π^obs, π^rel — sets of possibility distributions
  Path^Exe — partial execution path set
Output:
  P_t^obs — current observed plan
  Path^Exe — updated partial path set
  B — current behaviour hypotheses
  π_t^bev — current behaviour possibility distribution
  B_t — set of most possible behaviour hypotheses
1: P_t^obs ← appendObservedAction(a_t, P_{t-1}^obs)
2: Path^Exe ← updatePartialPathSet(Path^Exe, P, P_t^obs)
3: B ← generateBehaviourHypotheses(Path^Exe, P_t^obs)
4: π_t^Exe ← evalPartialPath(Path^Exe, P_t^obs, π^pre, π^obs, π^rel)
5: π_t^bev ← evaluateBehaviourPossibility(B, π_t^Exe, π^obs)
6: B_t ← selectBehaviourHypotheses(B, π_t^bev)
The observed plan P_t^obs is used to update the set of partial execution paths Path^Exe, where each partial path is a partial (or complete) coherent realization of an activity plan. The set of behaviour hypotheses B is obtained by selecting the subsets of Path^Exe that respect the conditions required to be a behaviour hypothesis. Each behaviour hypothesis b ∈ B can be a coherent realization of some activities (b ∈ B_N) or an erroneous realization of some activities (b ∈ B_E), according to its partial path subset and the observed plan. The behaviour possibility distribution π_t^bev is then evaluated by using the previously defined possibility distributions (π^pre, π^obs, π^rel) and the partial execution path possibility distribution π_t^Exe. This behaviour possibility distribution π_t^bev allows us to rank the set of behaviour hypotheses B according to the behaviour possibility and necessity measures Π_t^bev and N_t^bev. The recognition agent sends the most possible behaviour hypotheses B_t to an assistance agent, which plans an assistance task if needed.
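A matching sketch for one step of Algorithm 2, reusing update_paths, is_hypothesis and behaviour_possibility from above; the naive enumeration of path subsets in line 3 is exponential and only meant to mirror the definition.

from itertools import chain, combinations

def behaviour_recognition_step(a_t, t, observed_plan, paths_by_activity,
                               pi_exe, pi_obs, subsumes, expected, starts):
    observed_plan = observed_plan + [(a_t, t)]                         # line 1
    for act in paths_by_activity:                                      # line 2
        paths_by_activity[act] = update_paths(paths_by_activity[act],
                                              act, a_t, t,
                                              subsumes, expected, starts)
    owner = {tuple(p): act                     # map each path to its activity
             for act, ps in paths_by_activity.items() for p in ps}
    all_paths = list(owner)
    subsets = chain.from_iterable(combinations(all_paths, k)           # line 3
                                  for k in range(1, len(all_paths) + 1))
    hypotheses = [list(s) for s in subsets
                  if is_hypothesis(list(s), owner.__getitem__)]
    pi_bev = [behaviour_possibility(h, pi_exe, pi_obs, observed_plan)  # lines 4-5
              for h in hypotheses]
    best = max(pi_bev, default=0.0)                                    # line 6
    return observed_plan, paths_by_activity, [h for h, v in zip(hypotheses, pi_bev)
                                              if v == best]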
3 RELATED WORK
A number of researchers have investigated activity recognition as plan recognition. Logic-based approaches (Kautz, 1991) define a theory using first-order logic, in order to formalize activity recognition as an inference process. To alleviate the equiprobability problem of logical models, where no hypothesis can be privileged within the set of possible activities, probabilistic models (Liao et al., 2004; Philipose et al., 2004), mainly Bayesian or Markovian based, or hybrid models (Avrahami-Zilberbrand and Kaminka, 2007; Geib, 2007; Roy et al., 2009), which use logical and probabilistic reasoning, were proposed. The limit of the vast majority of these previous approaches is that they focus exclusively on the concept of probability, where the inference itself requires large numbers of prior and conditional probabilities. For example, in the context of assistive cognition within smart homes, requiring humans to specify the habitat's object involvement probabilities is time consuming and difficult when we consider all the potential objects involved in each stage of an activity, given the large number of activities performed. Moreover, probabilities do not allow us to represent complete ignorance; besides, there are numerous situations where it is not possible to give the agent probabilities based on statistical measures, but only qualitative information provided by experts or deduced from previous experiences. Our proposed model, by using possibility theory, allows us to mitigate those limitations by taking partial belief into account and by handling the behaviour hypotheses as a partially ordered set.
4 CONCLUSIONS
This paper has presented a formal framework for activity recognition based on possibilistic DL as the semantic model of the observed agent's behaviour. This framework constitutes a first step toward a more expressive ambient agent recognizer, which will make it easier to support the fuzziness and uncertainty constraints inherent to smart environments. The proposed model is currently being implemented in the software framework of our smart home infrastructure, which consists of a standard apartment with a kitchen, living room, dining room, bedroom, and bathroom, equipped with multiple sensor devices. Moreover, the next logical step consists in extending this framework in order to simultaneously deal with the vagueness of an activity's duration and the noise of the sensors. Finally, we clearly believe that considerable future work and large-scale experimentation will be necessary, at a more advanced stage of our work, to help evaluate the effectiveness of the model in the field.
REFERENCES
Augusto, J. C. and Nugent, C. D., editors (2006). Design-
ing Smart Homes: The Role of Artificial Intelligence,
volume 4008 of LNAI. Springer.
Avrahami-Zilberbrand, D. and Kaminka, G. A. (2007).
Utility-based plan recognition: an extended abstract.
In Proc. of AAMAS’07, pages 858–860.
Baader, F., Calvanese, D., McGuinness, D. L., Nardi, D.,
and Patel-Schneider, P. F., editors (2007). The De-
scription Logic Handbook: Theory, Implementation,
and Applications. Cambridge University Press, sec-
ond edition.
Casas, R., Marín, R. B., Robinet, A., Delgado, A. R., Yarza,
A. R., Mcginn, J., Picking, R., and Grout, V. (2008).
User modelling in ambient intelligence for elderly and
disabled people. In Proc. of the 11th ICCHP, number
5105 in LNCS. Springer-Verlag.
Dubois, D. and Prade, H. (1988). Possibility Theory: An
Approach to Computerized Processing of Uncertainty.
Plenum Press.
Geib, C. (2007). Plan recognition. In Kott, A. and McE-
neaney, W. M., editors, Adversarial Reasoning: Com-
putational Approaches to Reading the Opponent’s
Mind, pages 77–100. Chapman & Hall/CRC.
Kautz, H. A. (1991). A formal theory of plan recognition
and its implementation. In Allen, J. F., Kautz, H. A.,
Pelavin, R. N., and Tenenberg, J. D., editors, Reason-
ing About Plans, chapter 2, pages 69–126. Morgan
Kaufmann.
Liao, L., Fox, D., and Kautz, H. (2004). Learning and infer-
ring transportation routines. In Proc. of the AAAI’04,
pages 348–353.
Mihailidis, A., Boger, J., Canido, M., and Hoey, J. (2007).
The use of an intelligent prompting system for peo-
ple with dementia: A case study. ACM Interactions,
14(4):34–37.
Philipose, M., Fishkin, K. P., Perkowitz, M., Patterson,
D. J., Fox, D., Kautz, H., and Hähnel, D. (2004). In-
ferring activities from interactions with objects. IEEE
Pervasive Computing: Mobile and Ubiquitous Sys-
tems, 3(4):50–57.
Pollack, M. E. (2005). Intelligent technology for an aging
population: The use of AI to assist elders with cogni-
tive impairment. AI Magazine, 26(2):9–24.
Roy, P., Bouchard, B., Bouzouane, A., and Giroux,
S. (2009). A hybrid plan recognition model
for Alzheimer’s patients: Interleaved–erroneous
dilemma. Web Intelligence and Agent Systems: An
International Journal, 7(4):375–397.
Zadeh, L. A. (1978). Fuzzy sets as a basis for a theory of
possibility. Fuzzy Sets and Systems, 1(1):3–28.