Defense Against Model Inversion Attacks Using a Dummy Recognition Model Trained with Synthetic Samples

Yuta Kotsuji, Kazuaki Nakamura

2025

Abstract

Recently, biometric recognition models such as face identification models have been developing rapidly. At the same time, the risk of cyber-attacks on such models is increasing; one example is the model inversion attack (MIA). An MIA reconstructs or reveals the training samples of a victim recognition model by analyzing the relationship between its inputs and outputs. When an MIA is conducted on a biometric model, its training samples, such as face, iris, and fingerprint images, could be leaked. Since these are privacy-sensitive personal information, their leakage raises a serious privacy issue. Hence, it is desirable to develop a defense method against MIAs. Although several defense methods have been proposed in the past decade, they tend to decrease the recognition accuracy of the victim model. To solve this problem, we propose using a dummy model trained with synthetic images and combining it in parallel with the victim model; the combined model is released to users instead of the victim model alone. The key point of our proposed method is to force the dummy model to output a high confidence score only for a limited range of synthetic images, which allows the combined model to maintain its recognition accuracy. We experimentally confirmed that the proposed method reduces the success rate of MIA to less than 30% while maintaining a recognition accuracy of more than 95%.
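The parallel combination described in the abstract can be sketched as follows. This is an illustrative reading only, not the authors' implementation: the function name `combined_model`, the logit values, and the class layout are all hypothetical. The idea sketched here is that the released model exposes the victim's real classes alongside the dummy model's "decoy" classes in a single softmax; on genuine inputs the dummy branch stays quiet, while inversion-style optimization is drawn toward the decoy classes.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def combined_model(victim_logits, dummy_logits):
    # Parallel combination (one plausible scheme, for illustration):
    # concatenate the victim's class scores with the dummy model's
    # decoy-class scores and normalize them jointly.
    return softmax(np.concatenate([victim_logits, dummy_logits]))

# Hypothetical example: 3 real identity classes, 2 decoy classes.
# On a genuine face, the dummy branch (trained only on synthetic
# images) outputs low scores, so the top prediction is a real class.
p_real = combined_model(np.array([4.0, 0.5, 0.2]),
                        np.array([0.1, 0.1]))
assert p_real.argmax() == 0  # recognition accuracy preserved

# On a synthetic-looking input, the dummy branch fires strongly, so
# an attacker's inversion objective climbs toward a decoy class.
p_syn = combined_model(np.array([0.3, 0.2, 0.1]),
                       np.array([5.0, 0.4]))
assert p_syn.argmax() == 3  # a decoy class dominates
```

Note the design point this sketch tries to capture: because the dummy model responds only to a limited range of synthetic images, the combined model's behavior on legitimate inputs is essentially unchanged, which is how accuracy can be maintained while the attack surface is poisoned.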


Paper Citation


in Harvard Style

Kotsuji Y. and Nakamura K. (2025). Defense Against Model Inversion Attacks Using a Dummy Recognition Model Trained with Synthetic Samples. In Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP; ISBN 978-989-758-728-3, SciTePress, pages 883-892. DOI: 10.5220/0013160400003912


in Bibtex Style

@conference{visapp25,
author={Yuta Kotsuji and Kazuaki Nakamura},
title={Defense Against Model Inversion Attacks Using a Dummy Recognition Model Trained with Synthetic Samples},
booktitle={Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP},
year={2025},
pages={883-892},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013160400003912},
isbn={978-989-758-728-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP
TI - Defense Against Model Inversion Attacks Using a Dummy Recognition Model Trained with Synthetic Samples
SN - 978-989-758-728-3
AU - Kotsuji Y.
AU - Nakamura K.
PY - 2025
SP - 883
EP - 892
DO - 10.5220/0013160400003912
PB - SciTePress