A Mixed Quantization Approach for Data-Free Quantization of LLMs

Feng Zhang, Yanbin Liu, Weihua Li, Xiaodan Wang, Quan Bai

2025

Abstract

Large Language Models (LLMs) have demonstrated significant capabilities in intelligent activities such as natural language comprehension, content generation, and knowledge retrieval. However, training and deploying these models require substantial computational resources, posing a significant barrier to developing AI applications and conducting research. Various model compression techniques have been developed to address these demanding resource requirements. Nonetheless, there has been limited exploration of high-level quantization strategies that offer greater flexibility in balancing the trade-off between memory usage and accuracy. We propose an effective mixed-quantization method named MXQ to bridge this research gap and achieve a better memory-accuracy balance. Specifically, we observe that the weight distributions of LLMs vary considerably from layer to layer, resulting in different tolerances to quantization errors. Motivated by this, we derive a novel quantization optimization formulation that solves for the layer-wise quantization parameters while encoding the overall quantization memory budget as a constraint. The new formulation can be solved efficiently by converting it into a mixed-integer programming problem. Experiments show that our method can achieve the 1% accuracy-loss goal with an additional bit budget, or further reduce memory usage, on Llama models. This unlocks a wide range of quantization options and simplifies the memory-accuracy trade-off.
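The core idea described in the abstract (choosing a per-layer bit-width under a global memory budget by solving a mixed-integer program) can be illustrated with a minimal sketch. This is not the authors' MXQ formulation: the layer names, parameter counts, candidate bit-widths, error model, and budget below are all hypothetical placeholders, and the objective (minimize total estimated quantization error subject to an average-bits-per-weight constraint) is an assumed simplification.

# Minimal sketch of layer-wise bit-width allocation as a mixed-integer
# program. NOT the paper's exact MXQ formulation: all numbers and the
# error model below are hypothetical placeholders for illustration.
import pulp

layers = ["layer0", "layer1", "layer2"]                  # hypothetical layers
params = {"layer0": 4e6, "layer1": 8e6, "layer2": 8e6}   # weights per layer
bits = [2, 3, 4, 8]                                      # candidate bit-widths

# Hypothetical per-layer error estimate for each bit-width; in practice
# this would be derived from each layer's weight distribution.
err = {(l, b): params[l] / (2 ** b) for l in layers for b in bits}
budget_bits = 4.0                                        # average bits per weight

prob = pulp.LpProblem("mixed_precision_allocation", pulp.LpMinimize)
# x[l][b] = 1 iff layer l is quantized with b bits
x = pulp.LpVariable.dicts("x", (layers, bits), cat="Binary")

# Objective: total estimated quantization error across all layers.
prob += pulp.lpSum(err[(l, b)] * x[l][b] for l in layers for b in bits)

# Each layer is assigned exactly one bit-width.
for l in layers:
    prob += pulp.lpSum(x[l][b] for b in bits) == 1

# Memory budget: average bits per weight must not exceed the budget.
total_params = sum(params.values())
prob += pulp.lpSum(b * params[l] * x[l][b]
                   for l in layers for b in bits) <= budget_bits * total_params

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for l in layers:
    chosen = [b for b in bits if x[l][b].value() > 0.5]
    print(l, "->", chosen[0], "bits")

A real system would replace the placeholder error model with measured per-layer sensitivity to quantization, which is what makes layers with wider weight distributions receive larger bit-widths under the same budget.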

Paper Citation


in Harvard Style

Zhang F., Liu Y., Li W., Wang X. and Bai Q. (2025). A Mixed Quantization Approach for Data-Free Quantization of LLMs. In Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-737-5, SciTePress, pages 353-363. DOI: 10.5220/0013159100003890


in Bibtex Style

@conference{icaart25,
author={Feng Zhang and Yanbin Liu and Weihua Li and Xiaodan Wang and Quan Bai},
title={A Mixed Quantization Approach for Data-Free Quantization of LLMs},
booktitle={Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2025},
pages={353-363},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013159100003890},
isbn={978-989-758-737-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - A Mixed Quantization Approach for Data-Free Quantization of LLMs
SN - 978-989-758-737-5
AU - Zhang F.
AU - Liu Y.
AU - Li W.
AU - Wang X.
AU - Bai Q.
PY - 2025
SP - 353
EP - 363
DO - 10.5220/0013159100003890
PB - SciTePress