on a fixed-resolution image. Finally, SDBM has virtually no parameters to tune (apart from the resolution of the desired final image), which makes it easier to use than DBM.
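As a rough illustration of the general decision-map pipeline that SDBM streamlines, the sketch below builds such a map with scikit-learn. It is not the authors' SDBM code: it substitutes PCA's analytic inverse for SDBM's learned self-supervised projection and inverse projection, but it shows how the map resolution is indeed the only free parameter.

```python
# Sketch of a decision-boundary-map pipeline (illustrative, not the SDBM implementation):
# 1) train a classifier on nD data; 2) project the data to 2D; 3) map every pixel of a
# 2D grid back to nD via an inverse projection; 4) color each pixel by its predicted class.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

pca = PCA(n_components=2).fit(X)      # direct projection: nD -> 2D
X2 = pca.transform(X)

res = 100                             # resolution of the final image: the only knob
xs = np.linspace(X2[:, 0].min(), X2[:, 0].max(), res)
ys = np.linspace(X2[:, 1].min(), X2[:, 1].max(), res)
uu, vv = np.meshgrid(xs, ys)
grid2 = np.column_stack([uu.ravel(), vv.ravel()])

grid_nd = pca.inverse_transform(grid2)          # inverse projection: 2D -> nD
dmap = clf.predict(grid_nd).reshape(res, res)   # one class label per pixel
```

The resulting `dmap` array can be rendered directly (e.g., with `matplotlib.pyplot.imshow`) to show the classifier's decision zones; SDBM replaces the PCA pair above with a jointly trained neural projection and inverse projection.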
Future work can target several directions. We believe a highly relevant one is the generation of maps for multi-output classifiers, i.e., classifiers that can output more than a single class per sample. Secondly, we plan to organize quantitative user studies to gauge the interpretation errors that SDBM maps induce when users employ them to assess and/or compare the behavior of different classifiers, which is the core use case that decision maps were proposed for. Thirdly, we consider adapting SDBM to support the understanding of semantic segmentation models. Last but not least, packaging SDBM into a reusable library that can be integrated into typical ML pipelines would help it gain widespread adoption.
ACKNOWLEDGMENTS
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, and by FAPESP grants 2015/22308-2, 2017/25835-9 and 2020/13275-1, Brazil.
IVAPP 2022 - 13th International Conference on Information Visualization Theory and Applications