Chen, X., Zhang, J., and Wang, Z. (2021). Peek-a-boo: What (more) is disguised in a randomly weighted neural network, and how to find it efficiently. In International Conference on Learning Representations.
Frankle, J. and Carbin, M. (2018). The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635.
Gaier, A. and Ha, D. (2019). Weight agnostic neural networks. Advances in Neural Information Processing Systems, 32.
He, K., Zhang, X., Ren, S., and Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034.
Hinton, G., Vinyals, O., and Dean, J. (2015). Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Huang, G.-B., Zhu, Q.-Y., and Siew, C.-K. (2006). Extreme learning machine: Theory and applications. Neurocomputing, 70(1-3):489–501.
Jackson, A., Schoots, N., Ahantab, A., Luck, M., and Black, E. (2023). Finding sparse initialisations using neuroevolutionary ticket search (NETS). In Artificial Life Conference Proceedings 35, volume 2023, page 110. MIT Press.
Kasun, L. L. C., Zhou, H., Huang, G.-B., and Vong, C. M. (2013). Representational learning with ELMs for big data. IEEE Intelligent Systems.
Lee, N., Ajanthan, T., and Torr, P. H. (2018). SNIP: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340.
Levenshtein, V. I. et al. (1966). Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, volume 10, pages 707–710.
Li, T. and Srikumar, V. (2019). Augmenting neural networks with first-order logic. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Malach, E., Yehudai, G., Shalev-Shwartz, S., and Shamir, O. (2020). Proving the lottery ticket hypothesis: Pruning is all you need. In International Conference on Machine Learning, pages 6682–6691. PMLR.
Orseau, L., Hutter, M., and Rivasplata, O. (2020). Logarithmic pruning is all you need. Advances in Neural Information Processing Systems, 33:2925–2934.
Patel, Y. and Matas, J. (2021). FEDS: Filtered edit distance surrogate. In International Conference on Document Analysis and Recognition, pages 171–186. Springer.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825–2830.
Pensia, A., Rajput, S., Nagle, A., Vishwakarma, H., and Papailiopoulos, D. (2020). Optimal lottery tickets via subset sum: Logarithmic over-parameterization is sufficient. Advances in Neural Information Processing Systems, 33:2599–2610.
Ramanujan, V., Wortsman, M., Kembhavi, A., Farhadi, A., and Rastegari, M. (2020). What’s hidden in a randomly weighted neural network? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11893–11902.
Seabold, S. and Perktold, J. (2010). statsmodels: Econometric and statistical modeling with Python. In 9th Python in Science Conference.
Shevchenko, A. and Mondelli, M. (2020). Landscape connectivity and dropout stability of SGD solutions for over-parameterized neural networks. In International Conference on Machine Learning, pages 8773–8784. PMLR.
Tanaka, H., Kunin, D., Yamins, D. L., and Ganguli, S. (2020). Pruning neural networks without any data by iteratively conserving synaptic flow. Advances in Neural Information Processing Systems, 33:6377–6389.
Wang, C., Zhang, G., and Grosse, R. (2020a). Picking winning tickets before training by preserving gradient flow. arXiv preprint arXiv:2002.07376.
Wang, H., Qin, C., Bai, Y., Zhang, Y., and Fu, Y. (2021a). Recent advances on neural network pruning at initialization. arXiv preprint arXiv:2103.06460.
Wang, Y., Zhang, X., Xie, L., Zhou, J., Su, H., Zhang, B., and Hu, X. (2020b). Pruning from scratch. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12273–12280.
Wang, Z., Luo, T., Li, M., Zhou, J. T., Goh, R. S. M., and Zhen, L. (2021b). Evolutionary multi-objective model compression for deep neural networks. IEEE Computational Intelligence Magazine, 16(3):10–21.
Whitaker, T. (2022). Quantum neuron selection: Finding high performing subnetworks with quantum algorithms. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 2258–2264.
Whitley, D., Tinós, R., and Chicano, F. (2015). Optimal neuron selection: NK echo state networks for reinforcement learning. arXiv preprint arXiv:1505.01887.
Wortsman, M., Farhadi, A., and Rastegari, M. (2019). Discovering neural wirings. Advances in Neural Information Processing Systems, 32.
Wu, T., Li, X., Zhou, D., Li, N., and Shi, J. (2021). Differential evolution based layer-wise weight pruning for compressing deep neural networks. Sensors, 21(3):880.
Zhou, H., Lan, J., Liu, R., and Yosinski, J. (2019). Deconstructing lottery tickets: Zeros, signs, and the supermask. Advances in Neural Information Processing Systems, 32.