
tion process. We also plan to apply our approach
to other types of tree ensembles, such as gradient
boosted trees.
An Efficient Compilation-Based Approach to Explaining Random Forests Through Decision Trees