Ateniese, G., Mancini, L. V., Spognardi, A., Villani, A., Vitali, D., and Felici, G. (2015). Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers. International Journal of Security and Networks, 10(3):137–150.
Avent, B., Korolova, A., Zeber, D., Hovden, T., and Livshits, B. (2017). BLENDER: Enabling local search with a hybrid differential privacy model. In 26th USENIX Security Symposium (USENIX Security 17), pages 747–764.
Bouhaddi, M. and Adi, K. (2023). Mitigating membership inference attacks in machine learning as a service. In 2023 IEEE International Conference on Cyber Security and Resilience (CSR), pages 262–268. IEEE.
Carlini, N., Liu, C., Erlingsson, Ú., Kos, J., and Song, D. (2019). The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19), pages 267–284.
Chen, J., Li, K., and Yu, P. S. (2021). Privacy-preserving deep learning model for decentralized VANETs using fully homomorphic encryption and blockchain. IEEE Transactions on Intelligent Transportation Systems, 23(8):11633–11642.
du Pin Calmon, F. and Fawaz, N. (2012). Privacy against statistical inference. In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 1401–1408. IEEE.
Dunis, C., Middleton, P. W., Karathanasopoulos, A., and Theofilatos, K. (2016). Artificial intelligence in financial markets. Springer.
Dwork, C. and Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407.
Farnadi, G., Sitaraman, G., Sushmita, S., Celli, F., Kosinski, M., Stillwell, D., Davalos, S., Moens, M.-F., and De Cock, M. (2016). Computational personality recognition in social media. User Modeling and User-Adapted Interaction, 26:109–142.
Fredrikson, M., Lantz, E., Jha, S., Lin, S., Page, D., and Ristenpart, T. (2014). Privacy in pharmacogenetics: An End-to-End case study of personalized warfarin dosing. In 23rd USENIX Security Symposium (USENIX Security 14), pages 17–32.
Gong, N. Z. and Liu, B. (2016). You are who you know and how you behave: Attribute inference attacks via users' social friends and behaviors. In 25th USENIX Security Symposium (USENIX Security 16), pages 979–995.
Hildebrandt, M. (2018). Law as computation in the era of artificial legal intelligence: Speaking law to the power of statistics. University of Toronto Law Journal, 68(supplement 1):12–35.
Hu, H., Salcic, Z., Sun, L., Dobbie, G., Yu, P. S., and Zhang, X. (2022). Membership inference attacks on machine learning: A survey. ACM Computing Surveys (CSUR), 54(11s):1–37.
Jayaraman, B. (2022). Texas-100x data set. https://github.com/bargavj/texas100x. Accessed: 2023-08-24.
Jayaraman, B. and Evans, D. (2019). Evaluating differentially private machine learning in practice. In 28th USENIX Security Symposium (USENIX Security 19), pages 1895–1912.
Jayaraman, B. and Evans, D. (2022). Are attribute inference attacks just imputation? In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, pages 1569–1582.
Jia, J. and Gong, N. Z. (2018). AttriGuard: A practical defense against attribute inference attacks via adversarial machine learning. In 27th USENIX Security Symposium (USENIX Security 18), pages 513–529.
Jia, J., Salem, A., Backes, M., Zhang, Y., and Gong, N. Z. (2019). MemGuard: Defending against black-box membership inference attacks via adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 259–274.
Kosinski, M., Stillwell, D., and Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15):5802–5805.
Linden, G., Smith, B., and York, J. (2003). Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76–80.
Liu, Y., Chen, X., Liu, C., and Song, D. (2016). Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770.
Mahajan, D., Tople, S., and Sharma, A. (2020). Does learning stable features provide privacy benefits for machine learning models? In NeurIPS PPML Workshop.
Malekzadeh, M., Borovykh, A., and Gündüz, D. (2021). Honest-but-curious nets: Sensitive attributes of private inputs can be secretly coded into the classifiers' outputs. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pages 825–844.
Mehnaz, S., Dibbo, S. V., De Viti, R., Kabir, E., Brandenburg, B. B., Mangard, S., Li, N., Bertino, E., Backes, M., De Cristofaro, E., et al. (2022). Are your sensitive attributes private? Novel model inversion attribute inference attacks on classification models. In 31st USENIX Security Symposium (USENIX Security 22), pages 4579–4596.
Papernot, N., McDaniel, P., and Goodfellow, I. (2016). Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277.
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., and Swami, A. (2017). Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519.
Rivest, R. L., Adleman, L., and Dertouzos, M. L. (1978). On data banks and privacy homomorphisms. Foundations of Secure Computation, 4(11):169–180.
Salamatian, S., Zhang, A., du Pin Calmon, F., Bhamidipati, S., Fawaz, N., Kveton, B., Oliveira, P., and Taft, N. (2015). Managing your private and public data: Bringing down inference attacks against your privacy. IEEE Journal of Selected Topics in Signal Processing, 9(7):1240–1255.