Partial Tensorized Transformers for Natural Language Processing
Subhadra Vadlamannati, Ryan Solgi
2024
Abstract
The transformer architecture has revolutionized Natural Language Processing (NLP) and other machine-learning tasks due to its unprecedented accuracy. However, transformers' extensive memory and parameter requirements often hinder their practical applications. In this work, we study the use of tensor-train decomposition to compress transformer vision and language neural networks, namely BERT and ViT, while improving their accuracy. We focus on both embedding-layer compression and partial tensorization of neural networks (PTNN) through an algorithmic approach. Our novel PTNN approach improves the accuracy of existing models by up to 5% without any post-training adjustments, breaking new ground in the field of tensor decomposition.
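To make the embedding-layer compression the abstract refers to concrete, below is a minimal NumPy sketch of TT-SVD: the embedding matrix is reshaped into a 4-way tensor and factorized into a chain of small cores by sequential truncated SVDs. This is not the paper's implementation; the function name tt_svd, the matrix size 1024x512, the reshape (16, 64, 8, 64), and the TT ranks [8, 8, 8] are all illustrative assumptions.

```python
import numpy as np

def tt_svd(tensor, ranks):
    # Sequential truncated SVDs split a d-way tensor into d TT cores
    # of shape (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1.
    shape = tensor.shape
    cores = []
    r_prev = 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(ranks[k], len(S))  # truncate to the target TT rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # Carry S @ Vt forward and fold in the next mode.
        mat = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

# Hypothetical embedding table (vocab 1024, dim 512), reshaped so that
# both the vocabulary and embedding axes are factorized: 16*64 = 1024, 8*64 = 512.
W = np.random.randn(1024, 512)
cores = tt_svd(W.reshape(16, 64, 8, 64), ranks=[8, 8, 8])

# Contract the cores back together and compare parameter counts.
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=([-1], [0]))
approx = approx.reshape(W.shape)
tt_params = sum(c.size for c in cores)
print(f"dense params: {W.size}, TT params: {tt_params}, "
      f"rel. error: {np.linalg.norm(W - approx) / np.linalg.norm(W):.3f}")
```

Note that a random matrix has no exploitable redundancy, so the reconstruction error in this toy run is large; trained embedding tables exhibit the structure that makes low TT ranks viable, which is what the paper's embedding-layer experiments exploit.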
Paper Citation
in Harvard Style
Vadlamannati, S. and Solgi, R. (2024). Partial Tensorized Transformers for Natural Language Processing. In Proceedings of the 16th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART, pages 543-547. SciTePress. ISBN 978-989-758-680-4. DOI: 10.5220/0012366500003636
in Bibtex Style
@conference{icaart24,
author={Subhadra Vadlamannati and Ryan Solgi},
title={Partial Tensorized Transformers for Natural Language Processing},
booktitle={Proceedings of the 16th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART},
year={2024},
pages={543--547},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012366500003636},
isbn={978-989-758-680-4},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 16th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART
TI - Partial Tensorized Transformers for Natural Language Processing
SN - 978-989-758-680-4
AU - Vadlamannati S.
AU - Solgi R.
PY - 2024
SP - 543
EP - 547
DO - 10.5220/0012366500003636
PB - SciTePress