stipulating all the exceptions explicitly. It is also widely acknowledged that defaults appear in many models of the world: for example, we likely invoke causal completeness (Reiter, 1991) frequently when reasoning about the physical world. Roughly, this means that a manageable number of conditions capture the preconditions and the effects of actions, and that conditions not mentioned are irrelevant to the task at hand. It would be interesting to see whether our proposal could support the learning of such default theories, in both static and dynamic settings.
REFERENCES
Bacchus, F., Grove, A. J., Halpern, J. Y., and Koller, D. (1996). From statistical knowledge bases to degrees of belief. Artificial Intelligence, 87(1-2):75–143.
Boutilier, C. (1994). Unifying default reasoning and belief revision in a modal framework. Artificial Intelligence, 68(1):33–85.
Bueff, A. and Belle, V. (2024). Learning explanatory logical rules in non-linear domains: a neuro-symbolic approach. Machine Learning, pages 1–36.
Carlson, A., Betteridge, J., Kisiel, B., Settles, B., Hruschka Jr, E. R., and Mitchell, T. M. (2010). Toward an architecture for never-ending language learning. In AAAI, volume 5, page 3.
De Raedt, L., Dries, A., Thon, I., Van den Broeck, G., and Verbeke, M. (2015). Inducing probabilistic relational rules from probabilistic examples. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pages 1835–1842.
De Raedt, L., Kimmig, A., and Toivonen, H. (2007). ProbLog: A probabilistic Prolog and its application in link discovery. In Proc. IJCAI.
Denecker, M., Marek, V. W., and Truszczynski, M. (2000). Uniform semantic treatment of default and autoepistemic logic. In Proc. KR, pages 74–84.
Dimopoulos, Y. and Kakas, A. (1995). Learning non-monotonic logic programs: Learning exceptions. In Machine Learning: ECML-95, 8th European Conference on Machine Learning, Heraclion, Crete, Greece, pages 122–137. Springer.
Etherington, D. W. (1987). Relating default logic and circumscription. In Proceedings of the 10th International Joint Conference on Artificial Intelligence - Volume 1, pages 489–494, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Grosof, B. N. (1992). Representing and reasoning with defaults for learning agents. In Proceedings of the ML92 Workshop on Biases in Inductive Learning. Naval Research Laboratory, Washington, DC.
Halpern, J. (1997). A critical reexamination of default logic, autoepistemic logic, and only knowing. Computational Intelligence, 13(1):144–163.
Kok, S. and Domingos, P. (2007). Statistical predicate invention. In Proceedings of the 24th International Conference on Machine Learning, pages 433–440. ACM.
Kok, S. and Domingos, P. (2010). Learning Markov logic networks using structural motifs. In ICML, pages 551–558.
Lakemeyer, G. and Levesque, H. J. (2006). Towards an axiom system for default logic. In Proc. AAAI, pages 263–268.
McCarthy, J. (1980). Circumscription – a form of non-monotonic reasoning. Artificial Intelligence, 13(1-2):27–39.
McCarthy, J. and Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. In Machine Intelligence, pages 463–502.
Morgenstern, L. and McIlraith, S. A. (2011). John McCarthy's legacy. Artificial Intelligence, 175(1):1–24.
Muggleton, S., De Raedt, L., Poole, D., Bratko, I., Flach, P., Inoue, K., and Srinivasan, A. (2012). ILP turns 20. Machine Learning, 86(1):3–23.
Quinlan, J. R. and Cameron-Jones, R. M. (1993). FOIL: A midterm report. In Machine Learning: ECML-93, European Conference on Machine Learning, Vienna, Austria, pages 1–20. Springer.
Reiter, R. (1982). Circumscription implies predicate completion (sometimes). In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 418–420.
Reiter, R. (1991). The frame problem in the situation calculus: a simple solution (sometimes) and a completeness result for goal regression. In Artificial Intelligence and Mathematical Theory of Computation: Papers in Honor of John McCarthy, pages 359–380. Academic Press.
Sakama, C. (2005). Ordering default theories and nonmonotonic logic programs. Theoretical Computer Science, 338(1-3):127–152.
Schuurmans, D. and Greiner, R. (1994). Learning default concepts. In Proceedings of the Biennial Conference of the Canadian Society for Computational Studies of Intelligence, pages 99–106.
Shakerin, F., Salazar, E., and Gupta, G. (2017). A new algorithm to automate inductive learning of default theories. Theory and Practice of Logic Programming, 17(5-6):1010–1026.
Valiant, L. (2013). Probably approximately correct: Nature's algorithms for learning and prospering in a complex world. Basic Books.
Valiant, L. G. (1999). Robust logics. In Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing, pages 642–651.
Wang, H. and Gupta, G. (2022). FOLD-R++: a scalable toolset for automated inductive learning of default theories from mixed data. In International Symposium on Functional and Logic Programming, pages 224–242. Springer.
ICAART 2025 - 17th International Conference on Agents and Artificial Intelligence