
tions.
REFERENCES
Adadi, A. and Berrada, M. (2018). Peeking inside the black-
box: A survey on explainable artificial intelligence
(XAI). IEEE Access, 6:52138–52160.
Ansótegui, C., Bonet, M. L., and Levy, J. (2013). SAT-
based MaxSAT algorithms. Artificial Intelligence,
196:77–105.
Arenas, M., Barceló, P., Romero Orth, M., and Suber-
caseaux, B. (2022). On computing probabilistic ex-
planations for decision trees. Advances in Neural In-
formation Processing Systems, 35:28695–28707.
Audemard, G., Bellart, S., Bounia, L., Koriche, F., Lagniez,
J., and Marquis, P. (2021). On the computational in-
telligibility of Boolean classifiers. In Proc. of KR’21,
pages 74–86.
Audemard, G., Bellart, S., Bounia, L., Koriche, F., Lagniez,
J., and Marquis, P. (2022a). On preferred abductive
explanations for decision trees and random forests. In
Proc. of IJCAI’22.
Audemard, G., Bellart, S., Bounia, L., Koriche, F., Lagniez,
J.-M., and Marquis, P. (2022b). On the explanatory
power of Boolean decision trees. Data & Knowledge
Engineering, 142:102088.
Audemard, G., Bellart, S., Bounia, L., Koriche, F., Lagniez,
J.-M., and Marquis, P. (2022c). Trading complexity
for sparsity in random forest explanations. In Proc. of
AAAI’22.
Audemard, G., Bellart, S., Bounia, L., Lagniez, J.-M., Mar-
quis, P., and Szczepanski, N. (2023). PyXAI: computing
explanations for supervised learning models. In Proc.
of EGC’23.
Audemard, G., Koriche, F., and Marquis, P. (2020). On
tractable XAI queries based on compiled representa-
tions. In Proc. of KR’20, pages 838–849.
Azar, A. T., Elshazly, H. I., Hassanien, A. E., and Elko-
rany, A. M. (2014). A random forest classifier for
lymph diseases. Computer Methods and Programs in
Biomedicine, 113.
Bénard, C., Biau, G., Veiga, S. D., and Scornet, E. (2021).
Interpretable random forests via rule extraction. In
Proceedings of the 24th International Conference
on Artificial Intelligence and Statistics, AISTATS’21,
pages 937–945.
Biau, G. (2012). Analysis of a random forests model. Jour-
nal of Machine Learning Research, 13:1063–1095.
Bogomolov, A., Lepri, B., Staiano, J., Oliver, N., Pi-
anesi, F., and Pentland, A. (2014). Once upon a
crime: Towards crime prediction from demograph-
ics and mobile data. In Proceedings of the 16th
International Conference on Multimodal Interaction,
ICMI’14, pages 427–434. ACM.
Bounia, L. and Koriche, F. (2023). Approximating prob-
abilistic explanations via supermodular minimization
(corrected version). In Uncertainty in Artificial Intel-
ligence (UAI 2023), volume 216, pages 216–225.
Breiman, L. (2001). Random forests. Machine Learning,
45(1):5–32.
Choi, A., Shih, A., Goyanka, A., and Darwiche, A. (2020).
On symbolically encoding the behavior of random
forests. In Proc. of FoMLAS’20, 3rd Workshop on For-
mal Methods for ML-Enabled Autonomous Systems,
Workshop at CAV’20.
Criminisi, A. and Shotton, J. (2013). Decision Forests for
Computer Vision and Medical Image Analysis. Ad-
vances in Computer Vision and Pattern Recognition.
Springer.
Darwiche, A. (1999). Compiling devices into decompos-
able negation normal form. In Proc. of IJCAI’99, pages
284–289.
Darwiche, A. and Hirth, A. (2020). On the reasons behind
decisions. In Proc. of ECAI’20.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Gian-
notti, F., and Pedreschi, D. (2019). A survey of meth-
ods for explaining black box models. ACM Computing
Surveys, 51(5):93:1–93:42.
Ignatiev, A., Morgado, A., and Marques-Silva, J. (2019).
RC2: an efficient MaxSAT solver. Journal on Satisfia-
bility, Boolean Modeling and Computation.
Ignatiev, A., Narodytska, N., and Marques-Silva, J. (2019).
Abduction-based explanations for machine learning
models. In Proc. of AAAI’19, pages 1511–1519.
Izza, Y., Ignatiev, A., and Marques-Silva, J. (2020). On
explaining decision trees. ArXiv, abs/2010.11034.
Izza, Y. and Marques-Silva, J. (2021). On explaining ran-
dom forests with SAT. In Proc. of IJCAI’21, pages
2584–2591.
Izza, Y., Meel, K. S., and Marques-Silva, J. (2024). Locally-
minimal probabilistic explanations. ArXiv.
Liffiton, M. H. and Sakallah, K. A. (2008). Algorithms
for computing minimal unsatisfiable subsets of con-
straints. Journal of Automated Reasoning, 40:1–33.
Louenas, B. (2023). Formal models for explainable AI:
explanations for decision trees. PhD thesis, Université
d’Artois.
Louenas, B. (2024). Enhancing the Intelligibility of
Boolean Decision Trees with Concise and Reliable
Probabilistic Explanations. In 20th International Con-
ference on Information Processing and Management
of Uncertainty in Knowledge-Based Systems, Lisbon,
Portugal.
Lundberg, S. and Lee, S.-I. (2017). A unified approach
to interpreting model predictions. In Proc. of
NIPS’17, pages 4765–4774.
Marques-Silva, J. (2023). Logic-based explainability in ma-
chine learning. ArXiv, abs/2211.00541.
Marques-Silva, J. and Huang, X. (2023). Explainability is
not a game. Communications of the ACM, 67:66–75.
Miller, G. A. (1956). The magical number seven, plus or
minus two: Some limits on our capacity for processing
information. The Psychological Review, 63(2):81–97.
Miller, T. (2019). Explanation in artificial intelligence: In-
sights from the social sciences. Artificial Intelligence,
267:1–38.
Molnar, C. (2019). Interpretable Machine Learning - A
Guide for Making Black Box Models Explainable.
Leanpub.
Computing Improved Explanations for Random Forests: k-Majoritary Reasons