Currently, our method supports only numerical input. To broaden its range of application, it should be extended to handle mixed input consisting of both numerical and categorical data. In addition, our method sometimes fails for classifiers trained on imbalanced datasets, so its robustness must also be improved.
ACKNOWLEDGEMENTS
This work was partially supported by JSPS KAKENHI Grant Numbers 20H04143 and 17K00002.