
ACKNOWLEDGEMENTS
This work was supported by MCIN/AEI/10.13039/501100011033 under Grant PID2020-117759GB-I00.
REFERENCES
Bellman, R. (1961). Adaptive Control Processes: A Guided
Tour. Princeton University Press.
Berthold, M. R., Cebron, N., Dill, F., Gabriel, T. R., Kötter, T., Meinl, T., Ohl, P., Sieb, C., Thiel, K., and Wiswedel, B. (2007). KNIME: The Konstanz Information Miner. In Studies in Classification, Data Analysis, and Knowledge Organization (GfKL 2007). Springer.
Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2):123–140.
Breiman, L. (2001). Random forests. Machine Learning, 45(1):5–32.
Browne, M. and Ghidary, S. S. (2003). Convolutional neu-
ral networks for image processing: An application in
robot vision. In Gedeon, T. T. D. and Fung, L. C. C.,
editors, AI 2003: Advances in Artificial Intelligence,
pages 641–652, Berlin, Heidelberg. Springer Berlin
Heidelberg.
Cai, Y., Zhang, W., Zhang, R., Cui, X., and Fang, J. (2020).
Combined use of three machine learning modeling
methods to develop a ten-gene signature for the di-
agnosis of ventilator-associated pneumonia. Medical
Science Monitor, 26.
Chollet, F. et al. (2015). Keras. https://keras.io.
Deng, J.-L., Xu, Y.-H., and Wang, G. (2019). Identification of potential crucial genes and key pathways in breast cancer using bioinformatic analysis. Frontiers in Genetics, 10:695.
Feltes, B. C., Chandelier, E. B., Grisci, B. I., and Dorn, M. (2019). CuMiDa: An extensively curated microarray database for benchmarking and testing of machine learning approaches in cancer research. Journal of Computational Biology, 26(4):376–386. PMID: 30789283.
Freund, Y. (1990). Boosting a weak learning algorithm by majority. In Proceedings of the Third Annual Workshop on Computational Learning Theory (COLT '90), pages 202–216.
Freund, Y. and Schapire, R. E. (1995). A decision-theoretic generalization of on-line learning and an application to boosting. In Vitányi, P., editor, Computational Learning Theory, pages 23–37, Berlin, Heidelberg. Springer Berlin Heidelberg.
Graczyk, M., Lasota, T., Trawiński, B., and Trawiński, K. (2010). Comparison of bagging, boosting and stacking ensembles applied to real estate appraisal. In Nguyen, N. T., Le, M. T., and Świątek, J., editors, Intelligent Information and Database Systems, pages 340–350, Berlin, Heidelberg. Springer Berlin Heidelberg.
Hansen, L. and Salamon, P. (1990). Neural network en-
sembles. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 12(10):993–1001.
Hwang, K.-B., Cho, D.-Y., Park, S.-W., Kim, S.-D., and
Zhang, B.-T. (2002). Applying Machine Learning
Techniques to Analysis of Gene Expression Data:
Cancer Diagnosis, pages 167–182. Springer US,
Boston, MA.
Kingsford, C. and Salzberg, S. (2008). What are decision trees? Nature Biotechnology, 26(9):1011–1012.
Lancashire, L. J., Lemetre, C., and Ball, G. R. (2009). An
introduction to artificial neural networks in bioinfor-
matics—application to complex microarray and mass
spectrometry datasets in cancer studies. Briefings in
Bioinformatics, 10(3):315–329.
Nielsen, M. A. (2018). Neural Networks and Deep Learning. Determination Press.
Pirooznia, M., Yang, J., Yang, M., and Deng, Y. (2008). A comparative study of different machine learning methods on microarray gene expression data. BMC Genomics, 9(Suppl 1):S13.
Riedmiller, M. and Braun, H. (1993). A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In IEEE International Conference on Neural Networks, volume 1, pages 586–591.
Schapire, R. (1990). The strength of weak learnability. Ma-
chine Learning, 5(2):197–227.
Shalev-Shwartz, S. and Singer, Y. (2008). On the equivalence of weak learnability and linear separability: New relaxations and efficient boosting algorithms. In Proceedings of the 21st Annual Conference on Learning Theory (COLT 2008), pages 311–322.
Surowiecki, J. (2004). The Wisdom of Crowds: Why the
Many Are Smarter Than the Few and How Collective
Wisdom Shapes Business, Economies, Societies and
Nations. Little, Brown and Company.
Trunk, G. V. (1979). A problem of dimensionality: A sim-
ple example. IEEE Transactions on Pattern Analysis
and Machine Intelligence, PAMI-1(3):306–307.
Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27(11):1134–1142.
Wellinger, R. E. and Aguilar-Ruiz, J. S. (2022). A new chal-
lenge for data analytics: transposons. BioData Min-
ing, 15(1):9.
Wolpert, D. H. (1992). Stacked generalization. Neural Net-
works, 5(2):241–259.
Zhou, Z.-H. (2012). Ensemble Methods: Foundations and Algorithms. Chapman & Hall/CRC Machine Learning & Pattern Recognition Series. Taylor & Francis.
Zhou, Z.-H., Wu, J., and Tang, W. (2002). Ensembling neural networks: Many could be better than all. Artificial Intelligence, 137(1):239–263.