Authors:
Myria Bouhaddi
and
Kamel Adi
Affiliation:
Computer Security Research Laboratory, University of Quebec in Outaouais, Gatineau, Quebec, Canada
Keyword(s):
Machine Learning Security, Attribute Inference Attacks, Confidence Masking, Adversarial Machine Learning.
Abstract:
Machine learning (ML) models, widely used in sectors like healthcare, finance, and smart city development, face significant privacy risks due to their use of crowdsourced data containing sensitive information. These models are particularly susceptible to attribute inference attacks, where adversaries use model predictions and public or acquired metadata to uncover sensitive attributes such as locations or political affiliations. In response, our study proposes a novel, two-phased defense mechanism designed to efficiently balance data utility with privacy. First, our approach identifies the minimal level of noise that must be added to the prediction scores to thwart an adversary's classifier; this threshold is determined using adversarial ML techniques. We then enhance privacy by injecting noise drawn from a probability distribution obtained by solving a constrained convex optimization problem. To validate the effectiveness of our privacy mechanism, we conducted extensive experiments on real-world datasets. Our results indicate that our defense significantly outperforms existing methods while remaining adaptable to various data types.
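To make the idea of confidence masking concrete, the sketch below perturbs a model's prediction scores with noise and renormalizes them while preserving the top-1 label, so classification utility is retained. This is a generic, hypothetical illustration, not the authors' mechanism: the `noise_scale` parameter stands in for the minimal noise level the paper derives via adversarial ML, and the Laplace distribution is an assumption, not the optimized distribution from the paper's convex program.

```python
import numpy as np

def mask_confidences(scores, noise_scale=0.1, rng=None):
    """Illustrative confidence masking (hypothetical sketch).

    Perturbs prediction scores with Laplace noise and renormalizes,
    while keeping the predicted label (argmax) unchanged so top-1
    accuracy is unaffected. `noise_scale` is a stand-in for the
    minimal noise threshold a defender would calibrate.
    """
    rng = np.random.default_rng(rng)
    scores = np.asarray(scores, dtype=float)
    noisy = scores + rng.laplace(scale=noise_scale, size=scores.shape)
    noisy = np.clip(noisy, 1e-9, None)  # keep scores strictly positive
    noisy /= noisy.sum()                # renormalize to a distribution
    # If noise flipped the argmax, swap the two entries so the
    # predicted class stays the same (utility preservation).
    orig, new = scores.argmax(), noisy.argmax()
    if orig != new:
        noisy[orig], noisy[new] = noisy[new], noisy[orig]
    return noisy

probs = np.array([0.7, 0.2, 0.1])
masked = mask_confidences(probs, noise_scale=0.05, rng=0)
```

An adversary observing `masked` sees a distorted confidence vector, which degrades the signal exploited by attribute inference classifiers, while any downstream consumer relying only on the predicted label is unaffected.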