LibSteal: Model Extraction Attack Towards Deep Learning Compilers by Reversing DNN Binary Library

Jinquan Zhang, Pei Wang, Dinghao Wu

2023

Abstract

The need for Deep Learning (DL) based services has rapidly increased in recent years. As part of this trend, the privatization of Deep Neural Network (DNN) models has become increasingly popular. Authors give customers or service providers direct access to their models and let them deploy the models on devices or infrastructure outside the authors' control. Meanwhile, the emergence of DL compilers makes it possible to compile a DNN model into a lightweight binary for faster inference, which is attractive to many stakeholders. However, distilling the essence of a model into a binary that untrusted parties are free to examine creates an opportunity to leak essential information. With only the DNN binary library, it is possible to extract the neural network architecture through reverse engineering. In this paper, we present LibSteal, a framework that leaks DNN architecture information, similar or even equivalent to the original, by reversing the binary library generated by a DL compiler. The evaluation shows that LibSteal can efficiently steal the architecture information of victim DNN models. After training the extracted models with the same hyper-parameters, we achieve accuracy comparable to that of the original models.

Paper Citation


in Harvard Style

Zhang J., Wang P. and Wu D. (2023). LibSteal: Model Extraction Attack Towards Deep Learning Compilers by Reversing DNN Binary Library. In Proceedings of the 18th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE, ISBN 978-989-758-647-7, SciTePress, pages 283-292. DOI: 10.5220/0011754900003464


in Bibtex Style

@conference{enase23,
author={Jinquan Zhang and Pei Wang and Dinghao Wu},
title={LibSteal: Model Extraction Attack Towards Deep Learning Compilers by Reversing DNN Binary Library},
booktitle={Proceedings of the 18th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE},
year={2023},
pages={283-292},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011754900003464},
isbn={978-989-758-647-7},
}


in EndNote Style

TY - CONF

JO - Proceedings of the 18th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE
TI - LibSteal: Model Extraction Attack Towards Deep Learning Compilers by Reversing DNN Binary Library
SN - 978-989-758-647-7
AU - Zhang J.
AU - Wang P.
AU - Wu D.
PY - 2023
SP - 283
EP - 292
DO - 10.5220/0011754900003464
PB - SciTePress