Mitigating Outlier Activations in Low-Precision Fine-Tuning of Language Models

Authors: Alireza Ghaffari¹; Justin Yu¹; Mahsa Nejad¹; Masoud Asgharian²; Boxing Chen¹ and Vahid Partovi Nia¹

Affiliations: ¹ Huawei Noah’s Ark Lab, Montreal, Canada; ² Department of Mathematics and Statistics, McGill University, Montreal, Canada

Keyword(s): Accelerated Training, Compressed Training, Low-Precision Fine-Tuning, Language Models.

Abstract: Low-precision fine-tuning of language models has gained prominence as a cost-effective and energy-efficient way to deploy large-scale models in various applications. This approach, however, is susceptible to outlier values in the activations: outliers inflate the quantization scaling factor, making smaller values harder to represent and degrading the performance of fine-tuning in the low-precision regime. This paper investigates techniques for mitigating outlier activations in low-precision integer fine-tuning of language models. Our proposed approach represents the outlier activation values as 8-bit integers instead of floating-point (FP16) values. Keeping the outliers in integer form lets us apply operator tiling, which avoids the 16-bit integer matrix multiplications that would otherwise be required. We provide theoretical analysis and supporting experiments to demonstrate the effectiveness of our approach in improving the robustness and performance of low-precision fine-tuned language models.
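The abstract only sketches the method at a high level. For intuition, the following is a minimal NumPy sketch of the general idea, assuming symmetric per-group int8 quantization and a simple magnitude threshold for deciding which activation channels are outliers; the threshold value, the splitting heuristic, and all function names are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def quantize_int8(x, scale):
    """Symmetric int8 quantization with a given scale (illustrative)."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def tiled_int8_matmul(x, w, outlier_thresh=6.0):
    """Sketch: split activation channels into outliers/inliers, quantize each
    group to int8 with its own scale, and accumulate the two int8 matmuls.
    """
    # Hypothetical outlier criterion: per-channel max magnitude vs. threshold.
    col_max = np.abs(x).max(axis=0)
    out_mask = col_max > outlier_thresh

    y = np.zeros((x.shape[0], w.shape[1]), dtype=np.float32)
    for mask in (out_mask, ~out_mask):
        if not mask.any():
            continue
        xs, ws = x[:, mask], w[mask, :]
        # Separate scales per group keep the inlier scale small.
        sx = max(np.abs(xs).max() / 127.0, 1e-8)
        sw = max(np.abs(ws).max() / 127.0, 1e-8)
        xq = quantize_int8(xs, sx)
        wq = quantize_int8(ws, sw)
        # int8 x int8 operands with int32 accumulation, dequantized per tile.
        y += (xq.astype(np.int32) @ wq.astype(np.int32)) * (sx * sw)
    return y

# Usage: inject an outlier channel and compare against FP32 matmul.
x = np.random.randn(4, 16).astype(np.float32)
x[:, 3] *= 40.0
w = np.random.randn(16, 8).astype(np.float32)
print(np.max(np.abs(tiled_int8_matmul(x, w) - x @ w)))
```

In this sketch both groups stay in int8 with separate scales, so every tile multiplies int8 operands with int32 accumulation; as the abstract describes, the point of tiling is precisely to avoid widening the outlier group to 16-bit integer operands.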

CC BY-NC-ND 4.0

Paper citation in several formats:
Ghaffari, A.; Yu, J.; Nejad, M.; Asgharian, M.; Chen, B. and Partovi Nia, V. (2024). Mitigating Outlier Activations in Low-Precision Fine-Tuning of Language Models. In Proceedings of the 13th International Conference on Pattern Recognition Applications and Methods - ICPRAM; ISBN 978-989-758-684-2; ISSN 2184-4313, SciTePress, pages 478-484. DOI: 10.5220/0012567700003654

@conference{icpram24,
  author={Alireza Ghaffari and Justin Yu and Mahsa Nejad and Masoud Asgharian and Boxing Chen and Vahid {Partovi Nia}},
  title={Mitigating Outlier Activations in Low-Precision Fine-Tuning of Language Models},
  booktitle={Proceedings of the 13th International Conference on Pattern Recognition Applications and Methods - ICPRAM},
  year={2024},
  pages={478-484},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0012567700003654},
  isbn={978-989-758-684-2},
  issn={2184-4313},
}

TY - CONF
JO - Proceedings of the 13th International Conference on Pattern Recognition Applications and Methods - ICPRAM
TI - Mitigating Outlier Activations in Low-Precision Fine-Tuning of Language Models
SN - 978-989-758-684-2
IS - 2184-4313
AU - Ghaffari, A.
AU - Yu, J.
AU - Nejad, M.
AU - Asgharian, M.
AU - Chen, B.
AU - Partovi Nia, V.
PY - 2024
SP - 478
EP - 484
DO - 10.5220/0012567700003654
PB - SciTePress
ER -