to the nature of PIAs, our work has an explicit limitation: it is tailored to one specific property of the dataset. Nevertheless, the attack can be adapted to other properties P, provided the attacker is able to fabricate datasets that either contain or lack the given property.
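As a minimal, hypothetical sketch of this fabrication step (not part of our pipeline), the snippet below partitions an attacker-owned auxiliary pool by an arbitrary binary property predicate and draws balanced "with P" / "without P" shadow datasets from each side; the names pool, has_property, and fabricate_shadow_datasets are illustrative assumptions.

```python
# Sketch only: fabricating shadow datasets with and without a property P,
# assuming the attacker holds an auxiliary pool with per-sample annotations.
import random
from typing import Callable, List, Sequence, Tuple

def fabricate_shadow_datasets(
    pool: Sequence,                          # attacker's auxiliary samples
    has_property: Callable[[object], bool],  # predicate encoding property P
    size: int,                               # samples per shadow dataset
    n_datasets: int,                         # shadow datasets per class
    seed: int = 0,
) -> List[Tuple[List, int]]:
    """Return (dataset, label) pairs; label 1 = P present, 0 = P absent."""
    rng = random.Random(seed)
    with_p = [x for x in pool if has_property(x)]
    without_p = [x for x in pool if not has_property(x)]

    shadow_sets = []
    for _ in range(n_datasets):
        # Datasets exhibiting P: drawn only from samples having the property.
        shadow_sets.append((rng.sample(with_p, size), 1))
        # Datasets lacking P: drawn only from samples without the property.
        shadow_sets.append((rng.sample(without_p, size), 0))
    return shadow_sets
```

Each fabricated dataset would then be used to train a shadow model, and the resulting (model features, property label) pairs would train the meta-classifier, exactly as for the original property.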