Table 6: Enumeration of entire-outcome counterfactual explanations.

Dataset                          radius  min #CFs  avg #CFs  max #CFs  enum. time one CF (s)  min enum. time (s)  avg enum. time (s)  max enum. time (s)
YELP Review Analysis                 60      1891      2025      6858                 ≤ 10⁻³              ≤ 10⁻³                2.29               13.46
YELP Review Analysis                180      2601      3203      9693                 ≤ 10⁻³               0.009                 4.5               29.97
Augmented MNIST                     150        96      4971      9347                 ≤ 10⁻³                0.02               15.61               33.27
Augmented MNIST                     250      1158      5027     11323                 ≤ 10⁻³                1.77                15.9               45.36
IMDB Movie Genre Pred                30         5        14        22                    ≈ 0                0.13                2.78                7.47
Patient Characteristics (NYS15)      63       134      1052      2399                 ≤ 10⁻⁴                0.15                2.83                9.37
Table 7: Enumeration of entire-outcome sufficient reasons explanations.

Dataset                  radius  min #SRs  avg #SRs  max #SRs  enum. time one SR (s)  min enum. time (s)  avg enum. time (s)  max enum. time (s)
YELP Review Analysis         60     13116     23167     38620                  0.028               10.94               19.37               31.95
Augmented MNIST             150     11292     11956     12621                  0.053               12.26               13.06               13.85
IMDB Movie Genre Pred        30         3     41.83       161                  0.004               0.003                0.02                0.07