A Protection against the Extraction of Neural Network Models
Hervé Chabanne, Vincent Despiegel, Linda Guiga
2021
Abstract
Given oracle access to a Neural Network (NN), it is possible to extract its underlying model. We introduce a protection that adds parasitic layers, which keep the underlying NN's predictions mostly unchanged while complicating the task of reverse-engineering. Our countermeasure relies on approximating a noisy identity mapping with a Convolutional NN. We explain why the introduction of new parasitic layers complicates the attacks. We report experiments on the performance and accuracy of the protected NN.
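To illustrate the idea, here is a minimal PyTorch sketch (an illustration of the technique, not the authors' implementation): a small convolutional block is trained to reproduce its input up to small noise, so that it can later be spliced between existing layers while leaving the network's predictions mostly unchanged. The class name ParasiticBlock, the channel counts, and the noise level are assumptions made for this example.

import torch
import torch.nn as nn

class ParasiticBlock(nn.Module):
    # Hypothetical parasitic block: a small CNN trained to approximate
    # a noisy identity mapping, to be inserted between existing layers.
    def __init__(self, channels, hidden=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

# Train the block so that block(x) is approximately x plus small noise.
block = ParasiticBlock(channels=3)
opt = torch.optim.Adam(block.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):
    x = torch.rand(8, 3, 32, 32)              # random training inputs
    target = x + 0.01 * torch.randn_like(x)   # noisy identity target
    opt.zero_grad()
    loss = loss_fn(block(x), target)
    loss.backward()
    opt.step()

Once trained, such a block can be inserted after any layer with a matching channel count, adding structure for an attacker to reverse-engineer while barely perturbing the model's outputs.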
Paper Citation
in Harvard Style
Chabanne H., Despiegel V. and Guiga L. (2021). A Protection against the Extraction of Neural Network Models. In Proceedings of the 7th International Conference on Information Systems Security and Privacy - Volume 1: ICISSP, ISBN 978-989-758-491-6, pages 258-269. DOI: 10.5220/0010373302580269
in Bibtex Style
@conference{icissp21,
author={Hervé Chabanne and Vincent Despiegel and Linda Guiga},
title={A Protection against the Extraction of Neural Network Models},
booktitle={Proceedings of the 7th International Conference on Information Systems Security and Privacy - Volume 1: ICISSP},
year={2021},
pages={258-269},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010373302580269},
isbn={978-989-758-491-6},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 7th International Conference on Information Systems Security and Privacy - Volume 1: ICISSP
TI - A Protection against the Extraction of Neural Network Models
SN - 978-989-758-491-6
AU - Chabanne H.
AU - Despiegel V.
AU - Guiga L.
PY - 2021
SP - 258
EP - 269
DO - 10.5220/0010373302580269
ER -