In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 648–657.
Collaris, D. and van Wijk, J. J. (2020). ExplainExplore: Visual exploration of machine learning explanations. In 2020 IEEE Pacific Visualization Symposium (PacificVis), pages 26–35. IEEE.
Goodfellow, S. D., Goodwin, A., Greer, R., Laussen, P. C., Mazwi, M., and Eytan, D. (2018). Towards understanding ECG rhythm classification using convolutional neural networks and attention mappings. In Machine learning for healthcare conference, pages 83–101. PMLR.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5):1–42.
Hohman, F., Kahng, M., Pienta, R., and Chau, D. H. (2018). Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE transactions on visualization and computer graphics, 25(8):2674–2693.
Ismail, A. A., Gunady, M., Corrada Bravo, H., and Feizi, S. (2020). Benchmarking Deep Learning Interpretability in Time Series Predictions. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H., editors, Advances in Neural Information Processing Systems, volume 33, pages 6441–6452. Curran Associates, Inc.
Klaise, J., Looveren, A. V., Vacanti, G., and Coca, A. (2021). Alibi Explain: Algorithms for explaining machine learning models. Journal of Machine Learning Research, 22(181):1–7.
Kokhlikyan, N., Miglani, V., Martin, M., Wang, E., Alsallakh, B., Reynolds, J., Melnikov, A., Kliushkina, N., Araya, C., Yan, S., and Reblitz-Richardson, O. (2020). Captum: A unified and generic model interpretability library for PyTorch.
Krause, J., Dasgupta, A., Swartz, J., Aphinyanaphongs, Y., and Bertini, E. (2017). A workflow for visual diagnostics of binary classifiers using instance-level explanations. In 2017 IEEE Conference on Visual Analytics Science and Technology (VAST), pages 162–172. IEEE.
Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., and Baum, K. (2021). What do we want from explainable artificial intelligence (XAI)? A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296:103473.
Li, Y., Fujiwara, T., Choi, Y. K., Kim, K. K., and Ma, K.-L. (2020). A visual analytics system for multi-model comparison on clinical data predictions. Visual Informatics, 4(2):122–131.
Lundberg, S. M. and Lee, S.-I. (2017). A Unified Approach
to Interpreting Model Predictions. Advances in Neural
Information Processing Systems, 30:4765–4774.
Mutlu, B., Veas, E., and Trattner, C. (2016). VizRec: Recommending personalized visualizations. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(4):1–39.
Preece, A., Harborne, D., Braines, D., Tomsett, R., and
Chakraborty, S. (2018). Stakeholders in explainable
AI. arXiv preprint arXiv:1810.00184.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144.
Rojat, T., Puget, R., Filliat, D., Del Ser, J., Gelin, R., and Díaz-Rodríguez, N. (2021). Explainable artificial intelligence (XAI) on timeseries data: A survey. arXiv preprint arXiv:2104.00950.
Samek, W., Binder, A., Montavon, G., Lapuschkin, S., and Müller, K.-R. (2016). Evaluating the visualization of what a deep neural network has learned. IEEE transactions on neural networks and learning systems, 28(11):2660–2673.
Schlegel, U., Arnout, H., El-Assady, M., Oelke, D., and Keim, D. (2019). Towards a Rigorous Evaluation of XAI Methods on Time Series. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 4197–4201.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Shrikumar, A., Greenside, P., and Kundaje, A. (2017). Learning important features through propagating activation differences. In International Conference on Machine Learning, pages 3145–3153. PMLR.
Shrikumar, A., Greenside, P., Shcherbina, A., and Kundaje,
A. (2016). Not Just a Black Box: Learning Important
Features Through Propagating Activation Differences.
CoRR, abs/1605.0.
Simonyan, K., Vedaldi, A., and Zisserman, A. (2014). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. CoRR, abs/1312.6.
Spinner, T., Schlegel, U., Schäfer, H., and El-Assady, M. (2019). explAIner: A visual analytics framework for interactive and explainable machine learning. IEEE transactions on visualization and computer graphics, 26(1):1064–1074.
Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
Sundararajan, M., Taly, A., and Yan, Q. (2017). Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319–3328. PMLR.
Šimić, I., Sabol, V., and Veas, E. (2022). Perturbation effect: A metric to counter misleading validation of feature attribution. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, CIKM '22, pages 1798–1807, New York, NY, USA. Association for Computing Machinery.
Wexler, J., Pushkarna, M., Bolukbasi, T., Wattenberg, M., Viégas, F., and Wilson, J. (2019). The what-if tool: Interactive probing of machine learning models. IEEE transactions on visualization and computer graphics, 26(1):56–65.