ods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3–14.
Carlini, N. and Wagner, D. (2017b). Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE.
Ding, A. (2022). Trustworthy Cyber-Physical Systems Via
Physics-Aware and AI-Powered Security. PhD thesis,
Rutgers The State University of New Jersey, School
of Graduate Studies.
Ding, A., Chan, M., Hass, A., Tippenhauer, N. O., Ma, S., and Zonouz, S. (2023a). Get your cyber-physical tests done! Data-driven vulnerability assessment of robotic aerial vehicles. In 2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pages 67–80. IEEE.
Ding, A., Hass, A., Chan, M., Sehatbakhsh, N., and Zonouz, S. (2023b). Resource-aware DNN partitioning for privacy-sensitive edge-cloud systems. In International Conference on Neural Information Processing, pages 188–201. Springer.
Ding, A., Murthy, P., Garcia, L., Sun, P., Chan, M., and Zonouz, S. (2021). Mini-me, you complete me! Data-driven drone security via DNN-based approximate computing. In Proceedings of the 24th International Symposium on Research in Attacks, Intrusions and Defenses, pages 428–441.
Guo, C., Rana, M., Cisse, M., and Van Der Maaten, L.
(2017). Countering adversarial images using input
transformations. arXiv preprint arXiv:1711.00117.
He, W., Wei, J., Chen, X., Carlini, N., and Song, D. (2017). Adversarial example defense: Ensembles of weak defenses are not strong. In 11th USENIX Workshop on Offensive Technologies (WOOT 17).
Kurakin, A., Goodfellow, I. J., and Bengio, S. (2018). Adversarial examples in the physical world. In Artificial Intelligence Safety and Security, pages 99–112.
Liu, H., Li, Z., Xie, Y., Jiang, R., Wang, Y., Guo, X., and Chen, Y. (2020). LiveScreen: Video chat liveness detection leveraging skin reflection. In IEEE INFOCOM 2020 - IEEE Conference on Computer Communications, pages 1083–1092. IEEE.
Luo, Y. and Pfister, H. (2018). Adversarial defense of image
classification using a variational auto-encoder. arXiv
preprint arXiv:1812.02891.
Ma, X., Karimpour, A., and Wu, Y.-J. (2023). Eliminating
the impacts of traffic volume variation on before and
after studies: a causal inference approach. Journal of
Intelligent Transportation Systems, pages 1–15.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and
Vladu, A. (2017). Towards deep learning mod-
els resistant to adversarial attacks. arXiv preprint
arXiv:1706.06083.
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., and Swami, A. (2016). The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372–387. IEEE.
PGD-Attack. https://github.com/MadryLab/cifar10_challenge/blob/master/pgd_attack.py.
Prakash, A., Moran, N., Garber, S., DiLillo, A., and Storer, J. (2018). Deflecting adversarial attacks with pixel deflection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8571–8580.
Raff, E., Sylvester, J., Forsyth, S., and McLean, M. (2019).
Barrage of random transforms for adversarially robust
defense. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition.
RobustML. https://www.robust-ml.org.
Shafahi, A., Najibi, M., Xu, Z., Dickerson, J., Davis, L. S.,
and Goldstein, T. (2020). Universal adversarial train-
ing. In Proceedings of the AAAI Conference on Artifi-
cial Intelligence, volume 34, pages 5636–5643.
Sitawarin, C., Golan-Strieb, Z. J., and Wagner, D. (2022).
Demystifying the adversarial robustness of random
transformation defenses. In International Conference
on Machine Learning, pages 20232–20252. PMLR.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Er-
han, D., Goodfellow, I., and Fergus, R. (2013). In-
triguing properties of neural networks. arXiv preprint
arXiv:1312.6199.
Tang, M., Dai, A., DiValentin, L., Ding, A., Hass, A., Gong, N. Z., and Chen, Y. (2024). ModelGuard: Information-theoretic defense against model extraction attacks. In 33rd USENIX Security Symposium (USENIX Security 2024).
Tang, M., Zhang, J., Ma, M., DiValentin, L., Ding, A., Hassanzadeh, A., Li, H., and Chen, Y. (2022). FADE: Enabling large-scale federated adversarial training on resource-constrained edge devices. arXiv preprint arXiv:2209.03839.
Tang, Z., Feng, X., Xie, Y., Phan, H., Guo, T., Yuan, B., and Wei, S. (2020). VVSec: Securing volumetric video streaming via benign use of adversarial perturbation. In Proceedings of the 28th ACM International Conference on Multimedia, pages 3614–3623.
Wang, D., Li, C., Wen, S., Nepal, S., and Xiang, Y. (2020).
Defending against adversarial attack towards deep
neural networks via collaborative multi-task training.
IEEE Transactions on Dependable and Secure Com-
puting, 19(2):953–965.
Xie, C., Wang, J., Zhang, Z., Ren, Z., and Yuille, A. (2017).
Mitigating adversarial effects through randomization.
arXiv preprint arXiv:1711.01991.
Xu, W., Evans, D., and Qi, Y. (2017). Feature squeez-
ing: Detecting adversarial examples in deep neural
networks. arXiv preprint arXiv:1704.01155.
Zang, X., Yin, M., Huang, L., Yu, J., Zonouz, S., and Yuan,
B. (2022). Robot motion planning as video prediction:
A spatio-temporal neural network-based motion plan-
ner. In 2022 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), pages 12492–
12499. IEEE.
Zhang, Z., Qiao, S., Xie, C., Shen, W., Wang, B., and Yuille, A. L. (2018). Single-shot object detection with enriched semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5813–5821.