Our evaluation focused on algorithmic properties such as sparsity and feature relevances for assessing the ground-truth recovery rate (goodness) of the computed counterfactuals. However, it remains unclear whether, and how, these kinds of explanations of reject are useful and helpful to humans. Since it is difficult to implement "human usefulness" as a scoring function, a proper user study is necessary to evaluate usefulness. We leave these aspects as future work.
ACKNOWLEDGEMENT
We gratefully acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for grant TRR 318/1 2021 – 438445824, from the BMWi for grant 01MK20007E, and from the VW-Foundation for the project IMPACT, funded within the funding line "AI and its Implications for Future Society".