The Concept of Identifiability in ML Models
Stephanie von Maltzan
2022
Abstract
Recent research indicates that the machine learning process can be reversed by adversarial attacks. These attacks can be used to derive personal information from the training data. The supposedly anonymising machine learning process thus amounts to a process of pseudonymisation and is, therefore, subject to technical and organisational measures. Consequently, the unexamined belief in anonymisation as a guarantor of privacy cannot be easily upheld. It is, therefore, crucial to measure privacy through the lens of adversarial attacks, to distinguish precisely between personal and non-personal data, and, above all, to determine whether ML models represent pseudonyms of the training data.
Paper Citation
in Harvard Style
von Maltzan S. (2022). The Concept of Identifiability in ML Models. In Proceedings of the 7th International Conference on Internet of Things, Big Data and Security - Volume 1: IoTBDS, ISBN 978-989-758-564-7, pages 215-222. DOI: 10.5220/0011081600003194
in BibTeX Style
@conference{iotbds22,
author={Stephanie von Maltzan},
title={The Concept of Identifiability in ML Models},
booktitle={Proceedings of the 7th International Conference on Internet of Things, Big Data and Security - Volume 1: IoTBDS},
year={2022},
pages={215-222},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011081600003194},
isbn={978-989-758-564-7},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 7th International Conference on Internet of Things, Big Data and Security - Volume 1: IoTBDS
TI - The Concept of Identifiability in ML Models
SN - 978-989-758-564-7
AU - von Maltzan S.
PY - 2022
SP - 215
EP - 222
DO - 10.5220/0011081600003194
ER -