Authors:
Alexander Smirnov, Anton Agafonov and Nikolay Shilov
Affiliation:
SPC RAS, 14th Line, 39, St. Petersburg, Russia
Keyword(s):
Neuro-Symbolic Artificial Intelligence, Deep Neural Networks, Machine Learning, Concept Extraction, Post-Hoc Explanation, Trust Assessment, Enterprise Model Classification.
Abstract:
Neural network-based enterprise modelling support is becoming popular. However, in practical enterprise modelling scenarios, the amount of available data is often insufficient for efficient training of deep neural networks. One strategy to address this problem is to integrate symbolic knowledge into neural networks. Previous publications have shown that this strategy is useful, but the trust issue was not considered. This paper aims to analyse whether the trained neuro-symbolic models merely “learn” the samples better or rely on meaningful indicators for enterprise model classification. Post-hoc explanation (specifically, concept extraction) has been used as the study technique. The experimental results show that embedding symbolic knowledge not only improves the learning capabilities but also increases the trustworthiness of the trained machine learning models for enterprise model classification.
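To illustrate the general idea of concept-extraction-based post-hoc explanation mentioned in the abstract, the following is a minimal, hypothetical Python sketch of a linear concept probe over hidden-layer activations (a TCAV-style test). The data, concept name, and probe setup are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch: probe whether hidden activations encode a symbolic concept.
# All names and data below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical hidden-layer activations for enterprise-model samples that
# do / do not exhibit a symbolic concept (e.g. "contains an approval loop").
n, d = 200, 64
concept_direction = rng.normal(size=d)
with_concept = rng.normal(size=(n, d)) + 0.8 * concept_direction
without_concept = rng.normal(size=(n, d))

X = np.vstack([with_concept, without_concept])
y = np.array([1] * n + [0] * n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# If a simple linear probe separates the two activation sets, the network
# plausibly encodes the concept internally; probe accuracy can then serve
# as one indicator when assessing trust in the trained model.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Concept probe accuracy: {probe.score(X_te, y_te):.2f}")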