tions for non-linear classifier decisions by layer-wise
relevance propagation. PLoS ONE, 10(7):e0130140.
Bardos, A., Mollas, I., Bassiliades, N., and Tsoumakas, G.
(2022). Local explanation of dimensionality reduc-
tion. arXiv preprint arXiv:2204.14012.
Becht, E., McInnes, L., Healy, J., Dutertre, C.-A., Kwok,
I. W., Ng, L. G., Ginhoux, F., and Newell, E. W.
(2019). Dimensionality reduction for visualizing
single-cell data using UMAP. Nature Biotechnology,
37(1):38–44.
Bibal, A., Vu, V. M., Nanfack, G., and Frénay, B.
(2020). Explaining t-SNE embeddings locally by adapting
LIME. In ESANN, pages 393–398.
Boyd, S. and Vandenberghe, L. (2004). Convex Optimiza-
tion. Cambridge University Press, New York, NY,
USA.
Bunte, K., Biehl, M., and Hammer, B. (2012). A gen-
eral framework for dimensionality-reducing data visu-
alization mapping. Neural Computation, 24(3):771–
804.
Byrne, R. M. J. (2019). Counterfactuals in explainable ar-
tificial intelligence (XAI): Evidence from human rea-
soning. In IJCAI-19.
Doshi-Velez, F. and Kim, B. (2017). Towards a rigorous
science of interpretable machine learning.
Fisher, A., Rudin, C., and Dominici, F. (2018). All Models
are Wrong but many are Useful: Variable Importance
for Black-Box, Proprietary, or Misspecified Prediction
Models, using Model Class Reliance. arXiv e-prints,
page arXiv:1801.01489.
Gisbrecht, A. and Hammer, B. (2015). Data visualization
by nonlinear dimensionality reduction. Wiley Interdis-
ciplinary Reviews: Data Mining and Knowledge Dis-
covery, 5(2):51–73.
Gisbrecht, A., Schulz, A., and Hammer, B. (2015). Para-
metric nonlinear dimensionality reduction using ker-
nel t-SNE. Neurocomputing, 147:71–82.
Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep
learning. MIT press.
Kaski, S. and Peltonen, J. (2011). Dimensionality reduc-
tion for data visualization [applications corner]. IEEE
Signal Processing Magazine, 28(2):100–104.
Kim, B., Koyejo, O., and Khanna, R. (2016). Examples
are not enough, learn to criticize! criticism for inter-
pretability. In Advances in Neural Information Pro-
cessing Systems 29.
Kobak, D. and Berens, P. (2019). The art of using t-SNE for
single-cell transcriptomics. Nature Communications,
10(1):1–14.
Kohonen, T. (1990). The self-organizing map. Proceedings
of the IEEE, 78(9):1464–1480.
Kuhl, U., Artelt, A., and Hammer, B. (2022a). Keep
your friends close and your counterfactuals closer:
Improved learning from closest rather than plausi-
ble counterfactual explanations in an abstract setting.
arXiv preprint arXiv:2205.05515.
Kuhl, U., Artelt, A., and Hammer, B. (2022b). Let’s go to
the alien zoo: Introducing an experimental framework
to study usability of counterfactual explanations for
machine learning. arXiv preprint arXiv:2205.03398.
Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G.,
Samek, W., and Müller, K.-R. (2019). Unmasking
Clever Hans predictors and assessing what machines
really learn. Nature Communications, 10(1):1–8.
Lee, J. A. and Verleysen, M. (2007). Nonlinear dimension-
ality reduction, volume 1. Springer.
Looveren, A. V. and Klaise, J. (2019). Interpretable coun-
terfactual explanations guided by prototypes. CoRR,
abs/1907.02584.
McInnes, L., Healy, J., and Melville, J. (2018). UMAP: Uni-
form manifold approximation and projection for di-
mension reduction. arXiv preprint arXiv:1802.03426.
Molnar, C. (2019). Interpretable Machine Learning.
Mothilal, R. K., Sharma, A., and Tan, C. (2020). Explain-
ing machine learning classifiers through diverse coun-
terfactual explanations. In Proceedings of the 2020
Conference on Fairness, Accountability, and Trans-
parency, pages 607–617.
N/A (1994). Diabetes data set. https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html.
Offert, F. (2017). "I know it when I see it". Visualization
and intuitive interpretability.
European Parliament and Council (2016). General Data
Protection Regulation: Regulation (EU) 2016/679 of the
European Parliament.
Pearl, J. (2010). Causal inference. Causality: objectives
and assessment, pages 39–58.
Ribera, M. and Lapedriza, A. (2019). Can we do better
explanations? a proposal of user-centered explainable
ai. In IUI Workshops, volume 2327, page 38.
Rodriguez, P., Caccia, M., Lacoste, A., Zamparo, L.,
Laradji, I., Charlin, L., and Vazquez, D. (2021).
Beyond trivial counterfactual explanations with di-
verse valuable explanations. In Proceedings of the
IEEE/CVF International Conference on Computer Vi-
sion, pages 1056–1065.
Russell, C. (2019). Efficient search for diverse coherent ex-
planations. In Proceedings of the Conference on Fair-
ness, Accountability, and Transparency, pages 20–28.
Samek, W., Wiegand, T., and Müller, K. (2017). Explain-
able artificial intelligence: Understanding, visualiz-
ing and interpreting deep learning models. CoRR,
abs/1708.08296.
Schulz, A., Gisbrecht, A., and Hammer, B. (2014). Rel-
evance learning for dimensionality reduction. In
ESANN, pages 165–170. Citeseer.
Schulz, A. and Hammer, B. (2015). Metric learning in di-
mensionality reduction. In ICPRAM (1), pages 232–
239.
Schulz, A., Hinder, F., and Hammer, B. (2021). DeepView:
visualizing classification boundaries of deep neural
networks as scatter plots using discriminative dimen-
sionality reduction. In Proceedings of IJCAI, pages
2305–2311.
Tjoa, E. and Guan, C. (2019). A survey on explainable
artificial intelligence (XAI): towards medical XAI.
CoRR, abs/1907.07374.
Van Der Maaten, L. (2009). Learning a parametric embed-
ding by preserving local structure. In Artificial intelli-
gence and statistics, pages 384–391. PMLR.
"Why Here and not There?": Diverse Contrasting Explanations of Dimensionality Reduction
37