In ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY. http://arxiv.org/abs/1606.08813v1.
Harley, A. W. (2015). An interactive node-link visualization of convolutional neural networks. In ISVC, pages 867–877.
Hohman, F., Kahng, M., Pienta, R., and Chau, D. H. (2018). Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Transactions on Visualization and Computer Graphics.
Hohman, F., Park, H., Robinson, C., and Chau, D. H. (2020). Summit: Scaling deep learning interpretability by visualizing activation and attribution summarizations. IEEE Transactions on Visualization and Computer Graphics (TVCG).
Kahng, M., Andrews, P. Y., Kalro, A., and Chau, D. H. P. (2017). ActiVis: Visual exploration of industry-scale deep neural network models. IEEE Transactions on Visualization and Computer Graphics, 24(1):88–97.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105.
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436–444.
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
Liu, D., Cui, W., Jin, K., Guo, Y., and Qu, H. (2018). DeepTracker: Visualizing the training process of convolutional neural networks. ACM Transactions on Intelligent Systems and Technology (TIST), 10(1):6.
Liu, M., Shi, J., Li, Z., Li, C., Zhu, J., and Liu, S. (2016). Towards better analysis of deep convolutional neural networks. IEEE Transactions on Visualization and Computer Graphics, 23(1):91–100.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. (2013). Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
Montavon, G., Samek, W., and Müller, K.-R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73:1–15.
Montavon, G., Samek, W., and Müller, K.-R. (2017). Tutorial: Implementing deep Taylor decomposition / LRP.
Olah, C., Satyanarayan, A., Johnson, I., Carter, S., Schubert, L., Ye, K., and Mordvintsev, A. (2018). The building blocks of interpretability. Distill. https://distill.pub/2018/building-blocks.
Pezzotti, N., Höllt, T., Van Gemert, J., Lelieveldt, B. P., Eisemann, E., and Vilanova, A. (2017). DeepEyes: Progressive visual analytics for designing deep neural networks. IEEE Transactions on Visualization and Computer Graphics, 24(1):98–108.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
Saeed, N., Nam, H., Haq, M. I. U., and Muhammad Saqib, D. B. (2018). A survey on multidimensional scaling. ACM Computing Surveys (CSUR), 51(3):47.
Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Smilkov, D., Carter, S., Sculley, D., Viégas, F. B., and Wattenberg, M. (2016). Direct-manipulation visualization of deep networks. In ICML 2016 Visualization Workshop.
Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
Sugiyama, K., Tagawa, S., and Toda, M. (1981). Methods for visual understanding of hierarchical system structures. IEEE Transactions on Systems, Man, and Cybernetics, 11(2):109–125.
Tanahashi, Y., Hsueh, C.-H., and Ma, K.-L. (2015). An efficient framework for generating storyline visualizations from streaming data. IEEE Transactions on Visualization and Computer Graphics, 21(6):730–742.
Telea, A. and Auber, D. (2008). Code flows: Visualizing structural evolution of source code. Computer Graphics Forum, 27(3):831–838.
Xiao, H., Rasul, K., and Vollgraf, R. (2017). Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833.
Zeng, H., Haleem, H., Plantaz, X., Cao, N., and Qu, H. (2017). CNNComparator: Comparative analytics of convolutional neural networks. arXiv preprint arXiv:1710.05285.
Zhang, J., Wang, Y., Molino, P., Li, L., and Ebert, D. S. (2018). Manifold: A model-agnostic framework for interpretation and diagnosis of machine learning models. IEEE Transactions on Visualization and Computer Graphics, 25(1):364–373.