Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., and Herrera, F., (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, Information Fusion, volume 58, pages 82-115.
Bahdanau, D., Cho, K., and Bengio, Y., (2015). Neural machine translation by jointly learning to align and translate, in ICLR, San Diego, CA, USA.
Bratko, I., (1990). Prolog Programming for Artificial Intelligence, 2nd ed., Addison-Wesley Publishing Company, USA, 597 pages.
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W., (2016). OpenAI Gym, arXiv preprint arXiv:1606.01540.
Cingillioglu, N. and Russo, A., (2018). DeepLogic: Towards end-to-end differentiable logical reasoning, arXiv preprint arXiv:1805.07433.
Cohen, W., (2016). TensorLog: A differentiable deductive database, arXiv preprint arXiv:1605.06523.
Coppens, Y., Efthymiadis, K., Lenaerts, T., Nowé, A., Miller, T., Weber, R., and Magazzeni, D., (2019). Distilling deep reinforcement learning policies in soft decision trees, in Proc. of the IJCAI 2019 Workshop on Explainable Artificial Intelligence, pages 1-6.
Dong, H., Mao, J., Lin, T., Wang, C., Li, L., and Zhou, D., (2019). Neural logic machines, in Proc. of International Conference on Learning Representations, New Orleans, Louisiana, USA.
Fukuchi, Y., Osawa, M., Yamakawa, H., and Imai, M., (2017). Autonomous self-explanation of behavior for interactive reinforcement learning agents, in Proc. of the 5th International Conference on Human Agent Interaction - HAI ’17. ACM Press.
Goodfellow, I., Warde-Farley, D., Mirza, M., Courville, A., and Bengio, Y., (2013). Maxout networks, in Proc. of the 30th International Conference on Machine Learning, Atlanta, Georgia, USA.
van Hasselt, H., Guez, A., and Silver, D., (2016). Deep reinforcement learning with double Q-learning, in Proc. of the Thirtieth AAAI Conference on Artificial Intelligence, volume 30, number 1.
Hochreiter, S. and Schmidhuber, J., (1997). Long short-term memory, Neural Computation, volume 9, number 8, pages 1735-1780.
Honda, H. and Hagiwara, M., (2019). Question answering systems with deep learning-based symbolic processing, IEEE Access, volume 7, pages 152368-152378.
Honda, H. and Hagiwara, M., (2021). Analogical reasoning with deep learning-based symbolic processing, IEEE Access, volume 9, pages 121859-121870.
Kingma, D. and Ba, J., (2014). Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
Lee, J. H., (2019). Complementary reinforcement learning towards explainable agents, arXiv preprint arXiv:1901.00188.
Likert, R., (1932). A technique for the measurement of attitudes, Archives of Psychology, volume 22, number 140, pages 5-55.
Lipton, Z. C., (2018). The mythos of model interpretability, Communications of the ACM, volume 61, number 10, pages 36-43.
Madumal, P., Miller, T., Sonenberg, L., and Vetere, F., (2019). Explainable reinforcement learning through a causal lens, arXiv preprint arXiv:1905.10958.
Minervini, P., Bošnjak, M., Rocktäschel, T., and Riedel, S., (2018). Towards neural theorem proving at scale, arXiv preprint arXiv:1807.08204.
Minervini, P., Riedel, S., Stenetorp, P., Grefenstette, E., and Rocktäschel, T., (2020). Learning reasoning strategies in end-to-end differentiable proving, in Proc. of the 37th International Conference on Machine Learning, PMLR 119, pages 6938-6949.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M., (2013). Playing Atari with deep reinforcement learning, arXiv preprint arXiv:1312.5602.
Montavon, G., Samek, W., and Müller, K.-R., (2018). Methods for interpreting and understanding deep neural networks, Digital Signal Processing, volume 73, pages 1-15.
Osgood, C. E., Suci, G., and Tannenbaum, P., (1957). The Measurement of Meaning, Urbana, IL: University of Illinois Press.
Osgood, C. E., May, W. H., and Miron, M. S., (1975). Cross-Cultural Universals of Affective Meaning, Urbana, IL: University of Illinois Press.
Rocktäschel, T. and Riedel, S., (2017). End-to-end differentiable proving, in Proc. of NIPS 30, pages 3788-3800.
Sequeira, P. and Gervasio, M., (2019). Interestingness elements for explainable reinforcement learning: Understanding agents' capabilities and limitations, arXiv preprint arXiv:1912.09007.
Serafini, L. and d'Avila Garcez, A. S., (2016). Logic tensor networks: Deep learning and logical reasoning from data and knowledge, in Proc. of the 11th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy’16) co-located with the Joint Multi-Conference on Human-Level Artificial Intelligence (HLAI 2016), New York City, NY, USA.
Sourek, G., Aschenbrenner, V., Zelezny, F., and Kuzelka, O., (2015). Lifted relational neural networks, in Proc. of the NIPS Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches co-located with NIPS 29, Montreal, Canada.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, L., and Polosukhin, I., (2017). Attention is all you need, in Proc. of NIPS 30, pages 5998-6008.
van der Waa, J., van Diggelen, J., van den Bosch, K., and Neerincx, M., (2018). Contrastive explanations for reinforcement learning in terms of expected consequences, in IJCAI-18 Workshop on Explainable AI (XAI), volume 37.