through grants 01IS20088B (“KnowhowAnalyzer”)
and 01IS22062 (“AI research group FFS-AI”).
REFERENCES
Adam, S. P., Alexandropoulos, S.-A. N., Pardalos, P. M.,
and Vrahatis, M. N. (2019). No free lunch theorem:
A review. In Approximation and Optimization: Al-
gorithms, Complexity and Applications, pages 57–82.
Springer.
Atzberger, D., Cech, T., Scheibel, W., Limberger, D.,
Döllner, J., and Trapp, M. (2022). A benchmark for
the use of topic models for text visualization tasks. In
Proc. VINCI ’22, pages 17:1–4. ACM.
Becker, M., Lippel, J., Stuhlsatz, A., and Zielke, T. (2020).
Robust dimensionality reduction for data visualiza-
tion with deep neural networks. Graphical Models,
108:101060:1–15.
Bergstra, J. and Bengio, Y. (2012). Random search for
hyper-parameter optimization. JMLR, 13(10):281–
305.
Bredius, C., Tian, Z., Telea, A., Mulawade, R. N., Garth, C.,
Wiebel, A., Schlegel, U., Schiegg, S., and Keim, D. A.
(2022). Visual exploration of neural network projec-
tion stability. In Proc. MLVIS ’22, pages 1068:1–5.
EG.
Breunig, M. M., Kriegel, H.-P., Ng, R. T., and Sander, J.
(2000). LOF: Identifying density-based local outliers.
SIGMOD Record, 29(2):93–104.
Caliński, T. and Harabasz, J. (1974). A dendrite method
for cluster analysis. Communications in Statistics,
3(1):1–27.
Davies, D. L. and Bouldin, D. W. (1979). A cluster separa-
tion measure. TPAMI, 1(2):224–227.
Espadoto, M., Hirata, N. S., Falcão, A. X., and Telea, A. C.
(2020a). Improving neural network-based multidi-
mensional projections. In Proc. IVAPP ’20, pages 29–
41. INSTICC, SciTePress.
Espadoto, M., Hirata, N. S., and Telea, A. C. (2021a). Self-
supervised dimensionality reduction with neural net-
works and pseudo-labeling. In Proc. IVAPP ’21, pages
27–37. INSTICC, SciTePress.
Espadoto, M., Hirata, N. S. T., and Telea, A. C. (2020b).
Deep learning multidimensional projections. Informa-
tion Visualization, 19(3):247–269.
Espadoto, M., Martins, R. M., Kerren, A., Hirata, N. S. T.,
and Telea, A. C. (2021b). Toward a quantitative
survey of dimension reduction techniques. TVCG,
27(3):2153–2173.
Fournier, Q. and Aloise, D. (2019). Empirical comparison
between autoencoders and traditional dimensionality
reduction methods. In Proc. AIKE ’19, pages 211–
214. IEEE.
Gehan, E. A. (1965). A generalized Wilcoxon test
for comparing arbitrarily singly-censored samples.
Biometrika, 52(1–2):203–224.
Halkidi, M. and Vazirgiannis, M. (2001). Clustering validity
assessment: finding the optimal partitioning of a data
set. In Proc. ICDM ’01, pages 187–194. IEEE.
Hinton, G. E. and Salakhutdinov, R. R. (2006). Reducing
the dimensionality of data with neural networks. Sci-
ence, 313(5786):504–507.
Hodges, J. L. (1955). A bivariate sign test. The Annals of
Mathematical Statistics, 26(3):523–527.
Joia, P., Coimbra, D., Cuminato, J. A., Paulovich, F. V., and
Nonato, L. G. (2011). Local affine multidimensional
projection. TVCG, 17(12):2563–2571.
Jolliffe, I. (2005). Principal component analysis. In En-
cyclopedia of Statistics in Behavioral Science. John
Wiley & Sons, Ltd.
Kim, Y., Espadoto, M., Trager, S., Roerdink, J. B., and
Telea, A. (2022). SDR-NNP: Sharpened dimension-
ality reduction with neural networks. In Proc. IVAPP
’22, pages 63–76. INSTICC, SciTePress.
Kwon, B. C., Eysenbach, B., Verma, J., Ng, K., De Filippi,
C., Stewart, W. F., and Perer, A. (2018). Clustervision:
Visual supervision of unsupervised clustering. TVCG,
24(1):142–151.
Lisi, M. (2007). Some remarks on the Cantor pairing func-
tion. Le Matematiche, 62(1):55–65.
Liu, F. T., Ting, K. M., and Zhou, Z.-H. (2008). Isolation
forest. In Proc. ICDM ’08, pages 413–422. IEEE.
MacFarland, T. W. and Yates, J. M. (2016). Mann–Whitney
U test. In Introduction to Nonparametric Statistics
for the Biological Sciences Using R, pages 103–132.
Springer.
Makuch, R. W. and Johnson, M. F. (1986). Some issues
in the design and interpretation of “Negative” clinical
studies. Archives of Internal Medicine, 146(5):986–
989.
McInnes, L., Healy, J., and Melville, J. (2020).
UMAP: Uniform manifold approximation and pro-
jection for dimension reduction. arXiv preprint
arXiv:1802.03426.
Myers, L. and Sirois, M. J. (2004). Spearman correlation
coefficients, differences between. In Encyclopedia of
Statistical Sciences. John Wiley & Sons, Ltd.
Rousseeuw, P. J. (1987). Silhouettes: A graphical aid to
the interpretation and validation of cluster analysis.
Journal of Computational and Applied Mathematics,
20:53–65.
Saxena, D., Yadav, P., and Kantharia, N. (2011). Nonsignif-
icant P values cannot prove null hypothesis: Absence
of evidence is not evidence of absence. Journal of
Pharmacy and Bioallied Sciences, 3(3):465–466.
van der Maaten, L. and Hinton, G. (2008). Visualizing data
using t-SNE. JMLR, 9(11):2579–2605.
Yang, L. and Shami, A. (2020). On hyperparameter opti-
mization of machine learning algorithms: Theory and
practice. Neurocomputing, 415:295–316.
APPENDIX
The auxiliary material is available under DOI
10.5281/zenodo.7501914 and contains implementation
details, the QQ-plots, and all evaluation results.
IVAPP 2023 - 14th International Conference on Information Visualization Theory and Applications