poral probabilistic dependencies. However, there are some important differences from our approach. Except for a preliminary treatment in (Boutilier and Poole, 1996), all of this literature concerns only fully observable MDPs, whereas our approach represents POMDPs.
There is no notion of an action precondition in (Boutilier et al., 1999). In our approach, we can define states for which certain actions should not even be considered for selection in a future planning extension.
While we inherit the representational and inferential solution to the frame problem from the standard fluent calculus, using only 2TBNs as in (Boutilier et al., 1999) one must explicitly assert that fluents unaffected by a specific action persist in value; the representational frame problem (but not the inferential one) can, however, be solved by automated assertions (Boutilier and Goldszmidt, 1996).
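To make the contrast concrete, the following is a minimal sketch (with a hypothetical toggle-light action, not an example from the literature) of how a 2TBN action model needs a conditional probability table for every fluent at the next time step, and how unaffected fluents can be completed with automatically asserted identity ("persistence") distributions:

```python
# Illustrative sketch (hypothetical domain): in a 2TBN action model,
# every fluent needs a CPT at time t+1 -- even fluents the action does
# not affect. Those must be given identity ("persistence") CPTs, which
# can be asserted automatically; state update axioms, by contrast,
# mention only the affected fluents.

FLUENTS = ["holding", "door_open", "light_on"]

# Hypothetical action 'toggle_light': only 'light_on' is affected.
effects = {
    "light_on": lambda prev: ({True: 0.9, False: 0.1}
                              if not prev["light_on"]
                              else {True: 0.1, False: 0.9})
}

def complete_2tbn(effects, fluents):
    """Automated persistence assertions: each unaffected fluent
    copies its previous value with probability 1 (identity CPT)."""
    cpts = dict(effects)
    for f in fluents:
        if f not in cpts:
            cpts[f] = (lambda name: lambda prev: {prev[name]: 1.0})(f)
    return cpts

cpts = complete_2tbn(effects, FLUENTS)
prev = {"holding": True, "door_open": False, "light_on": False}
print(cpts["holding"](prev))   # {True: 1.0} -- persistence CPT
print(cpts["light_on"](prev))  # {True: 0.9, False: 0.1} -- effect CPT
```

This removes the representational burden of writing persistence CPTs by hand, but the network still carries one node per fluent per time slice, which is the inferential cost the text refers to.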
Our approach should be extended in future work to allow planning under uncertainty. To construct plans that achieve a given condition with probability above a threshold, we could apply conditional planning as defined in (Thielscher, 2005b) or use iterative planning with loops in the sense of (Levesque, 2005). It would also be possible to give plan skeletons in FLUX, similar to (Grosskreutz and Lakemeyer, 2000), which can drastically reduce planning time. Whether a plan satisfies a goal with some probability threshold can then be verified efficiently with our approach.
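The kind of verification meant here can be sketched as follows (a simplified stand-in for the FLUX encoding, with a hypothetical noisy pickup action): propagate a distribution over states through each probabilistic action of a fixed plan and compare the resulting goal probability with the threshold.

```python
# Minimal sketch (hypothetical domain, not the paper's FLUX encoding):
# verify that a fixed plan reaches a goal with probability above a
# threshold by propagating a state distribution through each action.

from collections import defaultdict

def noisy_pickup(state):
    """States are tuples of fluent values; an action maps a state
    to a distribution over successor states."""
    (held,) = state
    if held:
        return {state: 1.0}
    return {(True,): 0.8, (False,): 0.2}  # gripper succeeds 80% of the time

def propagate(dist, action):
    nxt = defaultdict(float)
    for state, p in dist.items():
        for succ, q in action(state).items():
            nxt[succ] += p * q
    return dict(nxt)

def plan_success_prob(init, plan, goal):
    dist = {init: 1.0}
    for action in plan:
        dist = propagate(dist, action)
    return sum(p for s, p in dist.items() if goal(s))

plan = [noisy_pickup, noisy_pickup]  # retry once on failure
prob = plan_success_prob((False,), plan, lambda s: s[0])
print(round(prob, 2), prob >= 0.9)   # 0.96 True
```

Here the two-attempt plan succeeds with probability 0.8 + 0.2 * 0.8 = 0.96, so it passes a 0.9 threshold; in the approach of the paper this probability would be inferred from the probabilistic fluent calculus encoding rather than enumerated explicitly.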
As additional future work, we intend to investigate to what extent we can avoid grounding a first-order knowledge state and instead use a first-order algorithm to query our Bayesian networks (de Salvo Braz et al., 2007).
REFERENCES
Bacchus, F., Halpern, J., and Levesque, H. (1999). Reasoning about noisy sensors and effectors in the situation calculus. Artificial Intelligence, 111(1–2):171–208.

Baier, J. A. and Pinto, J. (2003). Planning under uncertainty as Golog programs. J. Exp. Theor. Artif. Intell., 15(4):383–405.

Boutilier, C. and Goldszmidt, M. (1996). The frame problem and Bayesian network action representations. In Proceedings of the Canadian Conference on Artificial Intelligence (CSCSI).

Boutilier, C. and Poole, D. (1996). Computing optimal policies for partially observable decision processes using compact representations. In Proceedings of the 13th National Conference on Artificial Intelligence (AAAI), pages 1168–1175, Portland, Oregon, USA.

Boutilier, C., Dean, T., and Hanks, S. (1999). Decision-theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11:1–94.

de Salvo Braz, R., Amir, E., and Roth, D. (2007). Lifted first-order probabilistic inference. In Getoor, L. and Taskar, B., editors, Introduction to Statistical Relational Learning. MIT Press.

Gardiol, N. H. and Kaelbling, L. P. (2004). Envelope-based planning in relational MDPs. In Advances in Neural Information Processing Systems 16 (NIPS-03), Vancouver, Canada.

Grosskreutz, H. and Lakemeyer, G. (2000). Turning high-level plans into robot programs in uncertain domains. In Proceedings of the European Conference on Artificial Intelligence (ECAI).

Jin, Y. and Thielscher, M. (2004). Representing beliefs in the fluent calculus. In Proceedings of the European Conference on Artificial Intelligence (ECAI), pages 823–827, Valencia, Spain. IOS Press.

Kushmerick, N., Hanks, S., and Weld, D. S. (1995). An algorithm for probabilistic planning. Artificial Intelligence, 76(1–2):239–286.

Levesque, H. (2005). Planning with loops. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh, Scotland.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.

Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press.

Poole, D. and Zhang, N. L. (2003). Exploiting contextual independence in probabilistic inference. Journal of Artificial Intelligence Research, 18:263–313.

Reiter, R. (2001a). Knowledge in Action. MIT Press.

Reiter, R. (2001b). On knowledge-based programming with sensing in the situation calculus. ACM Transactions on Computational Logic, 2(4):433–457.

Shanahan, M. and Witkowski, M. (2000). High-level robot control through logic. In Proceedings of the International Workshop on Agent Theories, Architectures and Languages (ATAL), volume 1986 of LNCS, pages 104–121, Boston, MA. Springer.

Thielscher, M. (1999). From situation calculus to fluent calculus: State update axioms as a solution to the inferential frame problem. Artificial Intelligence, 111(1–2):277–299.

Thielscher, M. (2000). Representing the knowledge of a robot. In Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR), pages 109–120, Breckenridge, CO. Morgan Kaufmann.

Thielscher, M. (2005a). FLUX: A logic programming method for reasoning agents. Theory and Practice of Logic Programming, 5(4–5):533–565.

Thielscher, M. (2005b). Reasoning Robots: The Art and Science of Programming Robotic Agents, volume 33 of Applied Logic Series. Kluwer.

Tran, N. and Baral, C. (2004). Encoding probabilistic causal model in probabilistic action language. In Proceedings of the 19th National Conference on Artificial Intelligence (AAAI), pages 305–310.
ICAART 2010 - 2nd International Conference on Agents and Artificial Intelligence