Authors:
Vahideh Moghtadaiee 1; Amir Fathalizadeh 1 and Mina Alishahi 2

Affiliations:
1 Cyberspace Research Institute, Shahid Beheshti University, Tehran, Iran
2 Department of Computer Science, Open Universiteit, Amsterdam, The Netherlands
Keyword(s):
Membership Inference Attack, Indoor Localization, Differential Privacy, Location Privacy.
Abstract:
With the widespread adoption of location-based services and the increasing demand for indoor positioning systems, protecting indoor location privacy has become crucial. One way to assess a dataset’s resistance to leaking individuals’ information is the Membership Inference Attack (MIA). In this paper, we provide a comprehensive examination of MIA on indoor location privacy, evaluating its effectiveness in extracting sensitive information about individuals’ locations. We investigate the vulnerability of indoor location datasets under white-box and black-box attack settings. Additionally, we analyze MIA results after employing Differential Privacy (DP) to privatize the original indoor location training data. Our findings demonstrate that DP can act as a defense mechanism, especially against black-box MIA, reducing the efficiency of MIA on indoor location models. We conduct extensive experimental tests on three real-world indoor localization datasets to assess MIA in terms of the model architecture, the nature of the data, and the specific characteristics of the training datasets.
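
As a rough illustration of the black-box setting mentioned in the abstract, the sketch below mounts a simple confidence-threshold membership inference attack against a classifier trained on synthetic RSS-fingerprint-style data. The synthetic dataset, the MLP architecture, and the threshold tau are illustrative assumptions only, not the paper's actual models, datasets, or attack pipeline.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical synthetic stand-in for an indoor RSS fingerprint dataset:
# rows are fingerprints (RSS readings from 20 access points), labels are room IDs.
rng = np.random.default_rng(0)
X = rng.normal(-70.0, 10.0, size=(2000, 20))
y = rng.integers(0, 8, size=2000)

# Split into members (used to train the target model) and held-out non-members.
X_mem, y_mem = X[:1000], y[:1000]
X_non, y_non = X[1000:], y[1000:]

# Target localization model trained only on the member data.
target = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
target.fit(X_mem, y_mem)

def true_label_confidence(model, X, y):
    # Probability the model assigns to each sample's true label (black-box access only).
    proba = model.predict_proba(X)
    idx = np.searchsorted(model.classes_, y)
    return proba[np.arange(len(y)), idx]

# Threshold attack: predict "member" when the true-label confidence exceeds tau.
tau = 0.5
mem_scores = true_label_confidence(target, X_mem, y_mem)
non_scores = true_label_confidence(target, X_non, y_non)
attack_acc = 0.5 * ((mem_scores > tau).mean() + (non_scores <= tau).mean())
print(f"Threshold-based black-box MIA accuracy: {attack_acc:.2f}")

In the setting described by the abstract, DP would be applied to the training fingerprints before fitting the target model, which is expected to lower the accuracy reported by a test of this kind.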