forced to be aligned; the visual learning space can introduce distortions in the projected data as a result of the matching process between different projections. Moreover, being an optimization technique, the method employed to generate the projections has stochastic factors that need to be accounted for if different sequences of projections are to be compared, since the resulting vector fields can vary between runs. Additionally, a single vector field may not be enough to display the subtleties of some networks; a visualization could instead be generated from multiple vector fields in order to estimate and explore more complex visual learning spaces.
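To make this concrete, here is a minimal sketch of one way to account for such randomness, assuming a scikit-learn t-SNE pipeline; the library choice, helper names, and the Procrustes alignment step are our illustrative assumptions, not the implementation used in this paper. The idea is to fix the optimizer's seed and then rigidly align consecutive projections before deriving vector fields.

```python
# Minimal sketch (illustrative code, not the authors' implementation):
# fix the projection optimizer's seed, then remove residual rotation and
# reflection between runs with an orthogonal Procrustes alignment so that
# vector fields derived from different sequences stay comparable.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.manifold import TSNE

def project(activations, seed=0):
    # A fixed seed tames the stochastic initialization; runs may still
    # differ slightly because of the non-convex optimization.
    return TSNE(n_components=2, random_state=seed).fit_transform(activations)

def align(P, Q):
    # Rigidly align projection Q onto reference P (same row ordering).
    P0, Q0 = P - P.mean(0), Q - Q.mean(0)   # center both point sets
    R, _ = orthogonal_procrustes(Q0, P0)    # best rotation/reflection
    return Q0 @ R + P.mean(0)

# Example: the same layer at two training epochs, aligned before any
# displacement (vector field) computation.
rng = np.random.default_rng(0)
acts_epoch1 = rng.normal(size=(200, 64))
acts_epoch2 = acts_epoch1 + 0.1 * rng.normal(size=(200, 64))
P1 = project(acts_epoch1)
P2 = align(P1, project(acts_epoch2))
```

The Procrustes step only rotates or reflects the second projection; it does not distort relative distances, so whatever structure the projection reveals is preserved while run-to-run pose differences are removed.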
6 CONCLUSION
In this paper, we presented a new approach for projection-based ANN hidden layer visualization that uses different techniques to provide insights into how knowledge is generated in a DNN through training and how abstract representations are formed between layers. Our focus was to a) adopt a flow-based model that represents a transition space between projections and removes point-based clutter, and b) present a projection system capable of maintaining an aligned view across several projections, a limitation found in most t-SNE-based techniques. Our approach has other useful characteristics, namely the ability to compare different data and to align them using a common feature (e.g., comparing the results of different models applied to the same objects, or how different parts of the same system process data), and the generation of a space that ties different projections together, which may support other visualization aids in the future. Using this visualization, we performed experiments that aim to show how it can be used to generate knowledge. Our analysis was able to probe deeper into certain aspects of the training process of neural networks, attempting to explain subtle aspects of how knowledge is generated in a DNN system. There are many future research directions for the work presented in this paper: as an introductory study of these methods, the network architectures and experiments used are those commonly found in the literature, and more complex systems and datasets should provide further interesting analysis opportunities. Additionally, the learning projection space and vector fields as defined in this paper assume data of a sequential nature, but there is no hard restriction limiting them to this type of data.
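For completeness, the sketch below shows one way a dense vector field could be estimated from two consecutive, aligned projections of the same points; this formulation (per-point displacements interpolated onto a regular grid with SciPy's griddata) is an assumption of ours for illustration, not necessarily the one adopted in the paper.

```python
# Minimal sketch (an assumed formulation, not necessarily the paper's):
# derive a dense 2D vector field from two consecutive, aligned projections
# of the same points by interpolating per-point displacements onto a grid.
import numpy as np
from scipy.interpolate import griddata

def vector_field(P_prev, P_next, resolution=20):
    disp = P_next - P_prev                      # one displacement per point
    xs = np.linspace(P_prev[:, 0].min(), P_prev[:, 0].max(), resolution)
    ys = np.linspace(P_prev[:, 1].min(), P_prev[:, 1].max(), resolution)
    gx, gy = np.meshgrid(xs, ys)
    # Linear interpolation; grid cells outside the convex hull of the
    # points come back as NaN and can be masked before rendering.
    u = griddata(P_prev, disp[:, 0], (gx, gy), method="linear")
    v = griddata(P_prev, disp[:, 1], (gx, gy), method="linear")
    return gx, gy, u, v  # e.g., feed to matplotlib's quiver or streamplot
```

Rendering the interpolated field rather than the individual points is what removes the point-based clutter discussed above, since the flow summarizes many trajectories at once.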
ACKNOWLEDGMENTS
We would like to thank CAPES and FAPESP
(2017/08817-7, 2015/08118-6) for the financial sup-
port.