Zeroth Order Optimization for Pretraining Language Models

Nathan Allaire, Mahsa Ghazvini Nejad, Sébastien Le Digabel, Vahid Partovi Nia

2025

Abstract

The physical memory required for training Large Language Models (LLMs) grows with the model size and is limited by the GPU memory. In particular, back-propagation, which requires the computation of first-order derivatives, adds to this memory overhead. Training extremely large language models with memory-efficient algorithms remains a challenge with both theoretical and practical implications. Back-propagation-free training algorithms, also known as zeroth-order methods, have recently been examined to address this challenge. Their usefulness has been demonstrated for fine-tuning language models. However, there has so far been no study of language model pretraining using zeroth-order optimization, where the memory constraint manifests itself more severely. We establish the theoretical connection between second-order, first-order, and zeroth-order methods. We then apply zeroth-order optimization to pretraining lightweight language models and discuss why it cannot be readily applied. In particular, we show that the curse of dimensionality is the main obstacle, and we pave the way towards modifications of zeroth-order methods for pretraining such models.
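For intuition only, the sketch below shows a generic two-point (SPSA-style) zeroth-order gradient estimator on a toy quadratic loss: parameters are perturbed along random Gaussian directions and the gradient is approximated from loss differences, so no back-propagation is needed. This is a minimal sketch under that generic assumption, not the specific method or models studied in the paper.

import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, num_samples=4):
    # Two-point zeroth-order estimate: perturb the parameters along random
    # Gaussian directions and use loss differences instead of back-propagation.
    grad = np.zeros_like(theta)
    for _ in range(num_samples):
        z = np.random.randn(*theta.shape)
        delta = (loss_fn(theta + mu * z) - loss_fn(theta - mu * z)) / (2.0 * mu)
        grad += delta * z
    return grad / num_samples

# Toy usage: minimize a quadratic loss with function evaluations only.
loss = lambda w: 0.5 * np.sum(w ** 2)
theta = np.random.randn(10)
lr = 0.1
for step in range(200):
    theta = theta - lr * zo_gradient(loss, theta)
print(loss(theta))  # much smaller than the initial loss after these updates

The variance of such estimators grows with the number of perturbed parameters, which illustrates the curse-of-dimensionality obstacle the abstract refers to for pretraining-scale models.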



Paper Citation


in Harvard Style

Allaire N., Ghazvini Nejad M., Le Digabel S. and Partovi Nia V. (2025). Zeroth Order Optimization for Pretraining Language Models. In Proceedings of the 14th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM; ISBN 978-989-758-730-6, SciTePress, pages 113-121. DOI: 10.5220/0013261100003905


in Bibtex Style

@conference{icpram25,
author={Nathan Allaire and Mahsa Ghazvini Nejad and Sébastien Le Digabel and Vahid Partovi Nia},
title={Zeroth Order Optimization for Pretraining Language Models},
booktitle={Proceedings of the 14th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM},
year={2025},
pages={113-121},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013261100003905},
isbn={978-989-758-730-6},
}


in EndNote Style

TY - CONF

JO - Proceedings of the 14th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM
TI - Zeroth Order Optimization for Pretraining Language Models
SN - 978-989-758-730-6
AU - Allaire N.
AU - Ghazvini Nejad M.
AU - Le Digabel S.
AU - Partovi Nia V.
PY - 2025
SP - 113
EP - 121
DO - 10.5220/0013261100003905
PB - SciTePress