Batina, L., Bhasin, S., Jap, D., and Picek, S. (2019). CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel. In USENIX Security Symposium, pages 515–532. USENIX Association.
Biham, E. and Shamir, A. (1993). Differential Cryptanalysis of the Data Encryption Standard. Springer.
Breier, J., Jap, D., Hou, X., Bhasin, S., and Liu, Y. (2020). SNIFF: Reverse engineering of neural networks with fault attacks. CoRR, abs/2002.11021.
Carlini, N., Jagielski, M., and Mironov, I. (2020). Cryptanalytic extraction of neural network models. CoRR, abs/2003.04884.
Gong, Y., Liu, L., Yang, M., and Bourdev, L. D. (2014). Compressing deep convolutional networks using vector quantization. CoRR, abs/1412.6115.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples. In Bengio, Y. and LeCun, Y., editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Han, S., Mao, H., and Dally, W. J. (2016). Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR.
He, Z., Rakin, A. S., and Fan, D. (2019). Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 588–597. Computer Vision Foundation / IEEE.
Hong, S., Davinroy, M., Kaya, Y., Dachman-Soled, D., and Dumitras, T. (2020). How to 0wn NAS in your spare time. CoRR, abs/2002.06776.
Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. (2017). Quantized neural networks: Training neural networks with low precision weights and activations. J. Mach. Learn. Res., 18:187:1–187:30.
Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A. G., Adam, H., and Kalenichenko, D. (2018). Quantization and training of neural networks for efficient integer-arithmetic-only inference. In CVPR, pages 2704–2713. IEEE Computer Society.
Jagielski, M., Carlini, N., Berthelot, D., Kurakin, A., and Papernot, N. (2019). High-fidelity extraction of neural network models. CoRR, abs/1909.01838.
Kaspersky (2020). Machine learning methods for malware detection. Whitepaper.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
LeCun, Y., Cortes, C., and Burges, C. (2010). MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2.
LeNail, A. (2019). NN-SVG: Publication-ready neural network architecture schematics. Journal of Open Source Software, 4(33):747.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2018). Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Milli, S., Schmidt, L., Dragan, A. D., and Hardt, M. (2019). Model reconstruction from model explanations. In FAT, pages 1–9. ACM.
Miyato, T., Maeda, S., Koyama, M., and Ishii, S. (2019). Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell., 41(8):1979–1993.
Moosavi-Dezfooli, S., Fawzi, A., and Frossard, P. (2015). DeepFool: A simple and accurate method to fool deep neural networks. CoRR, abs/1511.04599.
Papernot, N., McDaniel, P. D., Jha, S., Fredrikson, M., Celik, Z. B., and Swami, A. (2016). The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy, EuroS&P 2016, Saarbrücken, Germany, March 21-24, 2016, pages 372–387. IEEE.
Rolnick, D. and Körding, K. P. (2019). Reverse-engineering deep ReLU networks. CoRR, abs/1910.00744.
Shamir, A., Safran, I., Ronen, E., and Dunkelman, O. (2019). A simple explanation for the existence of adversarial examples with small Hamming distance. CoRR, abs/1901.10861.
Simonyan, K. and Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556.
Yan, M., Fletcher, C. W., and Torrellas, J. (2018). Cache telepathy: Leveraging shared resource attacks to learn DNN architectures. CoRR, abs/1808.04761.
Zhang, C., Bengio, S., Hardt, M., and Singer, Y. (2019). Identity crisis: Memorization and generalization under extreme overparameterization. CoRR, abs/1902.04698.
Zhou, S., Ni, Z., Zhou, X., Wen, H., Wu, Y., and Zou, Y. (2016). DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. CoRR, abs/1606.06160.