Authors:
Sofiane Lounici 1; Mohamed Njeh 2; Orhan Ermis 2; Melek Önen 2 and Slim Trabelsi 1
Affiliations:
1 SAP Security Research, France; 2 EURECOM, France
Keyword(s):
Watermarking, Neural Networks, Privacy.
Abstract:
With the growing deployment of machine learning models for task automation, watermarking has emerged as a suitable solution to protect intellectual property. By embedding secret markers, called trigger instances, into a model, the owner can analyze the behavior of any suspect model on these markers and claim ownership if the model reproduces the expected behavior. However, on a Machine Learning as a Service (MLaaS) platform where models are exposed for inference, an attacker could forge such proofs to steal the ownership of watermarked models for profit. These attacks, called watermark forging attacks, are a serious threat to the intellectual property of model owners. Current work offers only limited solutions: it constrains model owners to disclose either their model or their trigger set to a third party. In this paper, we propose countermeasures against watermark forging attacks that operate in a black-box environment and are compatible with privacy-preserving machine learning, where both the model weights and the inputs can be kept private. We show that our solution successfully prevents two different types of watermark forging attacks under minimal assumptions regarding access to the model's weights or the content of the trigger set.
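The black-box verification step described in the abstract, where the owner queries a suspect model on the secret trigger instances and claims ownership when it reproduces the embedded behavior, can be sketched as follows. This is a minimal illustration under assumptions introduced here, not the paper's actual protocol: the `verify_ownership` function, the generic `query` interface, and the 0.9 match threshold are all hypothetical.

```python
# Minimal sketch of black-box trigger-set verification.
# Assumptions (not from the paper): a suspect model is reachable only
# through a `query(x) -> label` inference API, and ownership is claimed
# when at least `threshold` of the secret triggers map to their embedded
# target labels.

from typing import Callable, Sequence, Tuple

def verify_ownership(
    query: Callable[[Sequence[float]], int],              # black-box inference API
    trigger_set: Sequence[Tuple[Sequence[float], int]],   # secret (input, target label) pairs
    threshold: float = 0.9,                                # illustrative match threshold
) -> bool:
    """Return True if the suspect model reproduces the watermark behavior."""
    matches = sum(1 for x, y in trigger_set if query(x) == y)
    return matches / len(trigger_set) >= threshold
```

A non-watermarked model of comparable accuracy would be expected to match only a small fraction of the trigger labels, which is why a high match rate serves as evidence of ownership; the forging attacks the paper addresses work by fabricating a trigger set that a victim model happens to satisfy.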