This variant is not ready for practical use yet.
5 CONCLUSION
Our method is neural-symbolic, but its results are derivations, which are precise and explainable. If some neural predicate representations are unreliable, only search speed is affected. Our best-first strategy is expected to work best for derivations of goals that are sets of ground literals. Fortunately, the primary purpose of KBs and logic programs, unlike that of theorem provers, is to derive facts expressible as ground literals. As the examples have shown, our best-first strategy may dramatically reduce backtracking for typical derivation tasks. Nonetheless, further experiments are necessary to obtain quantitative results on benchmark KBs.
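To make the strategy concrete, here is a minimal Python sketch of best-first backward chaining, not a transcription of Algorithm 2: the hooks score (a neural objective over the remaining goals), expand (which enumerates clause bodies unifiable with a goal), and the step budget are our assumptions for illustration.

import heapq

def best_first_prove(goal_literals, kb, score, expand, max_steps=10000):
    """Best-first backward chaining over sets of ground literals.

    score(goals) is assumed to estimate how promising the remaining
    goals are; expand(goal, kb) is assumed to enumerate
    (body_literals, substitution) pairs for one goal.
    """
    # heapq is a min-heap, so scores are negated; the counter breaks
    # ties so goal lists are never compared directly.
    frontier = [(-score(goal_literals), 0, list(goal_literals))]
    counter = 1
    for _ in range(max_steps):
        if not frontier:
            return False   # search space exhausted: no derivation
        _, _, goals = heapq.heappop(frontier)
        if not goals:
            return True    # every goal discharged: derivation found
        first, rest = goals[0], goals[1:]
        for body, subst in expand(first, kb):
            child = [subst(g) for g in body + rest]
            heapq.heappush(frontier, (-score(child), counter, child))
            counter += 1
    return None            # step budget exceeded

Backtracking is implicit here: alternatives are never discarded but remain in the queue and are revisited only when higher-scored branches fail, which is how the strategy reduces explicit backtracking.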
The effect of our strategy on inference speed depends on the accuracy of neural predicate representations. At the time of writing, evaluations of the accuracy of NN representations of predicates are scarce. Algorithm 2 is presented in a general form; it can easily be adapted to various knowledge representation formats, including those deviating from FOL.
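The adaptation point can be made explicit with a sketch under hypothetical names (Clause, unify, and make_expand are ours, not the paper's): only unification and clause access are format-specific, so replacing them retargets a search such as the best_first_prove sketch above to another representation.

from typing import Callable, Iterable, Protocol, Tuple

class Clause(Protocol):
    """Whatever the format calls a rule: a head plus body literals."""
    head: object
    body: Tuple[object, ...]

def make_expand(clauses: Iterable[Clause], unify: Callable) -> Callable:
    """Build an expand hook for any clause-like representation.

    unify(goal, head) is assumed to return a substitution (a callable
    over literals) or None; only this call is format-specific.
    """
    clause_list = list(clauses)
    def expand(goal, kb=None):
        for c in clause_list:
            subst = unify(goal, c.head)
            if subst is not None:
                yield list(c.body), subst
    return expand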
Our best-first search strategy can be applied to any resolution method, not just to backward chaining, but Algorithm 2 will not give positive results for resolvents containing literals whose negations are not derivable. The strategy could also be extended to FOL with equality. In this case, the selection of resolvents should additionally be done for paramodulation or other rules enabling derivations in FOL with equality (Chang and Lee, 1973). Our objective function can be applied directly to the paramodulation rule.
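In that setting, the same objective can rank candidate inferences uniformly, no matter which rule produced them. The following sketch assumes a hypothetical stream of (rule_name, resolvent) pairs coming from resolution, paramodulation, and similar rules:

def select_inference(candidates, score):
    """Pick the most promising inference across rule types.

    candidates is assumed to yield (rule_name, resolvent) pairs;
    one neural objective ranks them all, which is the sense in which
    the objective function applies directly to paramodulation.
    """
    return max(candidates, key=lambda c: score(c[1]), default=None)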
Any heuristic search strategy would be especially beneficial for inductive theories because the induction rule is a source of infinite branching (Hutter, 1997), but adapting the objective function to the induction rule in general is problematic. Nonetheless, our objective function can be used for the induction rule limited to literals (Sakharov, 2020). If the selection does not yield a positive result, then the induction rule should be abandoned for the respective derivation step.
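Concretely, this amounts to guarding the rule with the objective. In the sketch below, induction_candidates, the threshold value, and the convention that None tells the caller to fall back to other inference rules are all our assumptions:

def try_induction(goal, induction_candidates, score, threshold=0.5):
    """Apply the induction rule only when the objective endorses it.

    induction_candidates(goal) is assumed to yield literal-level
    induction instances; threshold is a hypothetical cut-off.
    """
    scored = [(score(c), c) for c in induction_candidates(goal)]
    positive = [(s, c) for s, c in scored if s > threshold]
    if not positive:
        return None  # abandon induction at this derivation step
    return max(positive, key=lambda sc: sc[0])[1]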
REFERENCES
Chang, C.-L. and Lee, R. C.-T. (1973). Symbolic logic and mechanical theorem proving. Academic Press.
Cingillioglu, N. and Russo, A. (2019). DeepLogic: To-
wards end-to-end differentiable logical reasoning. In
AAAI 2019 Spring Symposium on Combining Machine
Learning with Knowledge Engineering, volume 2350.
CEUR-WS.org.
Cohen, W., Yang, F., and Mazaitis, K. R. (2020). TensorLog: A probabilistic database implemented using deep-learning infrastructure. Journal of Artificial Intelligence Research, 67:285–325.
d’Avila Garcez, A. S., Gori, M., Lamb, L. C., Serafini,
L., Spranger, M., and Tran, S. N. (2019). Neural-
symbolic computing: An effective methodology for
principled integration of machine learning and reason-
ing. Journal of Applied Logics, 6(4):611–632.
Dong, H., Mao, J., Lin, T., Wang, C., Li, L., and Zhou,
D. (2019). Neural logic machines. In International
Conference on Learning Representations.
Hutter, D. (1997). Coloring terms to control equational rea-
soning. Journal of Automated Reasoning, 18(3):399–
442.
Kimmig, A., Demoen, B., De Raedt, L., Costa, V. S., and
Rocha, R. (2010). On the implementation of the prob-
abilistic logic programming language ProbLog. arXiv
preprint arXiv:1006.4442.
Knuth, D. E. (1998). The Art of Computer Programming, Volume 2: Seminumerical Algorithms. Addison-Wesley.
Lamb, L. C., d’Avila Garcez, A. S., Gori, M., Prates, M.
O. R., Avelar, P. H. C., and Vardi, M. Y. (2020). Graph
neural networks meet neural-symbolic computing: A
survey and perspective. In Proceedings of the Twenty-
Ninth International Joint Conference on Artificial In-
telligence, IJCAI 2020, pages 4877–4884.
Manhaeve, R., Dumancic, S., Kimmig, A., Demeester, T.,
and De Raedt, L. (2018). DeepProbLog: Neural prob-
abilistic logic programming. In Advances in Neural
Information Processing Systems, pages 3749–3759.
Marcus, G. (2020). The next decade in AI: four steps
towards robust artificial intelligence. arXiv preprint
arXiv:2002.06177.
Marra, G., Giannini, F., Diligenti, M., and Gori, M. (2019).
Integrating learning and reasoning with deep logic
models. In Joint European Conference on Machine
Learning and Knowledge Discovery in Databases,
pages 517–532.
McCune, W. (2003). Otter 3.3 reference manual and guide.
Technical report, Argonne National Lab.
Minervini, P., Bosnjak, M., Rocktäschel, T., Riedel, S.,
and Grefenstette, E. (2020). Differentiable reason-
ing on large knowledge bases and natural language.
In Knowledge Graphs for eXplainable Artificial Intel-
ligence: Foundations, Applications and Challenges,
pages 125–142. IOS Press.
Mints, G. (1993). Gentzen-type systems and resolution rule.
II. Predicate logic. In Logic Colloquium’90: ASL
Summer Meeting in Helsinki, pages 163–190. Asso-
ciation for Symbolic Logic.
Nie, X. (1997). Non-Horn clause logic programming. Artificial Intelligence, 92(1):243–258.