Efficient Neural Network Training via Subset Pretraining

Jan Spörer, Bernhard Bermeitinger, Tomas Hrycej, Niklas Limacher, Siegfried Handschuh

2024

Abstract

In training neural networks, it is common practice to use partial gradients computed over batches, typically very small subsets of the training set. This approach is motivated by the argument that such a partial gradient is close to the true one, with precision growing only with the square root of the batch size. A theoretical justification can be given with the help of stochastic approximation theory. However, the conditions for the validity of this theory are not satisfied by the usual learning rate schedules. Batch processing is also difficult to combine with efficient second-order optimization methods. The proposal presented here is based on a different hypothesis: the loss minimum of the whole training set can be expected to be well approximated by the minima of its subsets. Such subset minima can be computed in a fraction of the time needed to optimize over the whole training set. This hypothesis has been tested on the MNIST, CIFAR-10, and CIFAR-100 image classification benchmarks, optionally extended by training data augmentation. The experiments confirm that results equivalent to those of conventional training can be reached. In summary, even small subsets are representative if the overdetermination ratio of the given model parameter set sufficiently exceeds unity. The computing expense can then be reduced to a tenth or less.
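
To make the abstract's two-phase procedure concrete, the following is a minimal sketch: optimize first to the minimum of a small training subset, then fine-tune briefly on the full set. It assumes plain SGD and a synthetic stand-in dataset; the model size, subset fraction, optimizer settings, and the formula used for the overdetermination ratio are illustrative assumptions, not the paper's actual MNIST/CIFAR-10/CIFAR-100 configurations.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, Subset

torch.manual_seed(0)

# Synthetic stand-in for a training set (hypothetical dimensions).
X = torch.randn(6000, 32)
y = torch.randint(0, 10, (6000,))
full = TensorDataset(X, y)
subset = Subset(full, range(600))  # a 10% subset used for pretraining

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

# Overdetermination ratio of the subset (assumption: Q = number of scalar
# training constraints / number of trainable parameters); per the abstract,
# Q should sufficiently exceed unity for the subset to be representative.
num_params = sum(p.numel() for p in model.parameters())
q = len(subset) * 10 / num_params  # 10 output constraints per sample
print(f"subset overdetermination ratio Q = {q:.2f}")  # about 2.2 here

def train(dataset, epochs, lr):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for xb, yb in DataLoader(dataset, batch_size=64, shuffle=True):
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

train(subset, epochs=20, lr=0.1)  # phase 1: optimize to the subset minimum
train(full, epochs=2, lr=0.01)    # phase 2: short fine-tune on the full set

Because phase 1 touches only a tenth of the data, most gradient evaluations are saved, which is where the reported reduction in computing expense would originate; the abstract also suggests that working on a fixed small subset, unlike mini-batch processing, is easier to combine with efficient second-order optimizers.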

Paper Citation


in Harvard Style

Spörer, J., Bermeitinger, B., Hrycej, T., Limacher, N. and Handschuh, S. (2024). Efficient Neural Network Training via Subset Pretraining. In Proceedings of the 16th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management - Volume 1: KDIR, pages 242-249. SciTePress. ISBN: 978-989-758-716-0. DOI: 10.5220/0012893600003838


in BibTeX Style

@conference{kdir24,
author={Jan Spörer and Bernhard Bermeitinger and Tomas Hrycej and Niklas Limacher and Siegfried Handschuh},
title={Efficient Neural Network Training via Subset Pretraining},
booktitle={Proceedings of the 16th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management - Volume 1: KDIR},
year={2024},
pages={242--249},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012893600003838},
isbn={978-989-758-716-0},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 16th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management - Volume 1: KDIR
TI - Efficient Neural Network Training via Subset Pretraining
SN - 978-989-758-716-0
AU - Spörer J.
AU - Bermeitinger B.
AU - Hrycej T.
AU - Limacher N.
AU - Handschuh S.
PY - 2024
SP - 242
EP - 249
DO - 10.5220/0012893600003838
PB - SciTePress
ER -