ACKNOWLEDGEMENTS
The authors would like to acknowledge funding from
the New Zealand Ministry of Business, Innovation
and Employment (MBIE) for project UOWX1911,
Artificial Intelligence for Human-Centric Security.
REFERENCES
Barreno, M., Nelson, B., Joseph, A. D., and Tygar, J. D.
(2010). The security of machine learning. Machine
Learning, 81:121–148.
Barreno, M., Nelson, B., Sears, R., Joseph, A. D., and Ty-
gar, J. D. (2006). Can machine learning be secure? In
Proceedings of the 2006 ACM Symposium on Informa-
tion, computer and communications security, pages
16–25.
Benmalek, M., Benrekia, M. A., and Challal, Y. (2022).
Security of federated learning: Attacks, defensive
mechanisms, and challenges. Revue des Sciences
et Technologies de l’Information-S
´
erie RIA: Revue
d’Intelligence Artificielle, 36(1):49–59.
Caroline, B., Christian, B., Stephan, B., Luis, B., Giuseppe,
D., Damiani, E., Sven, H., Caroline, L., Jochen, M.,
Nguyen, D. C., et al. (2021). Securing machine learn-
ing algorithms.
Chakraborty, A., Alam, M., Dey, V., Chattopadhyay, A.,
and Mukhopadhyay, D. (2021). A survey on adversar-
ial attacks and defences. CAAI Transactions on Intel-
ligence Technology, 6(1):25–45.
Dong, G. and Liu, H. (2018). Feature engineering for ma-
chine learning and data analytics. CRC press.
Feurer, M. and Hutter, F. (2019). Hyperparameter optimiza-
tion. Automated machine learning: Methods, systems,
challenges, pages 3–33.
Greshake, K., Abdelnabi, S., Mishra, S., Endres, C., Holz,
T., and Fritz, M. (2023). More than you’ve asked for:
A comprehensive analysis of novel prompt injection
threats to application-integrated large language mod-
els. arXiv preprint arXiv:2302.12173.
Grimmeisen, B., Chegini, M., and Theissler, A. (2022). Vis-
gil: machine learning-based visual guidance for inter-
active labeling. The Visual Computer, pages 1–23.
Hu, H., Salcic, Z., Sun, L., Dobbie, G., Yu, P. S., and Zhang,
X. (2022). Membership inference attacks on machine
learning: A survey. ACM Computing Surveys (CSUR),
54(11s):1–37.
Kreuzberger, D., K
¨
uhl, N., and Hirschl, S. (2023). Ma-
chine learning operations (mlops): Overview, defini-
tion, and architecture. IEEE Access.
Liu, P., Xu, X., and Wang, W. (2022). Threats, attacks and
defenses to federated learning: issues, taxonomy and
perspectives. Cybersecurity, 5(1):1–19.
Marshall, A., Parikh, J., Kiciman, E., and Kumar, R. S. S.
Ai/ml pivots to the security development lifecycle bug
bar. https://learn.microsoft.com/en-us/security/engin
eering/bug-bar-aiml. Accessed: 2023-10-26.
Martins, G., Bhatia, S., Koutsoukos, X., Stouffer, K., Tang,
C., and Candell, R. (2015). Towards a systematic
threat modeling approach for cyber-physical systems.
In 2015 Resilience Week (RWS), pages 1–6. IEEE.
Mauri, L. and Damiani, E. (2022). Modeling threats to ai-
ml systems using stride. Sensors, 22(17).
Mirsky, Y., Demontis, A., Kotak, J., Shankar, R., Gelei, D.,
Yang, L., Zhang, X., Pintor, M., Lee, W., Elovici, Y.,
and Biggio, B. (2023). The threat of offensive ai to
organizations. Computers & Security, 124:103006.
MITRE. Mitre atlas, adversarial threat landscape for
artificial-intelligence systems. https://atlas.mitre.org.
Accessed: 2023-10-26.
Moon, S., An, G., and Song, H. O. (2022). Preemptive im-
age robustification for protecting users against man-
in-the-middle adversarial attacks. In Proceedings of
the AAAI Conference on Artificial Intelligence, vol-
ume 36, pages 7823–7830.
Oseni, A., Moustafa, N., Janicke, H., Liu, P., Tari, Z., and
Vasilakos, A. (2021). Security and privacy for artifi-
cial intelligence: Opportunities and challenges. arXiv
preprint arXiv:2102.04661.
Roh, Y., Heo, G., and Whang, S. E. (2019). A survey on
data collection for machine learning: a big data-ai in-
tegration perspective. IEEE Transactions on Knowl-
edge and Data Engineering, 33(4):1328–1347.
Selin, J. (2019). Evaluation of threat modeling methodolo-
gies.
Singh, S., Bhure, S., and van der Veer, R. (2023). Owasp
machine learning security top 10 - draft release v0.3.
Symeonidis, G., Nerantzis, E., Kazakis, A., and Papakostas,
G. A. (2022). Mlops-definitions, tools and challenges.
In 2022 IEEE 12th Annual Computing and Commu-
nication Workshop and Conference (CCWC), pages
0453–0460. IEEE.
ur Rehman, T., Khan, M. N. A., and Riaz, N.
(2013). Analysis of requirement engineering pro-
cesses, tools/techniques and methodologies. Interna-
tional Journal of Information Technology and Com-
puter Science (IJITCS), 5(3):40.
Wang, D., Li, C., Wen, S., Nepal, S., and Xiang, Y. (2020).
Man-in-the-middle attacks against machine learning
classifiers via malicious generative models. IEEE
Transactions on Dependable and Secure Computing,
18(5):2074–2087.
Wang, Z., Ma, J., Wang, X., Hu, J., Qin, Z., and Ren, K.
(2022). Threats to training: A survey of poisoning at-
tacks and defenses on machine learning systems. ACM
Computing Surveys, 55(7):1–36.
Worzyk, N., Kahlen, H., and Kramer, O. (2019). Phys-
ical adversarial attacks by projecting perturbations.
In 28th International Conference on Artificial Neu-
ral Networks, Munich, Germany, pages 649–659.
Springer.
Yang, L. and Shami, A. (2020). On hyperparameter opti-
mization of machine learning algorithms: Theory and
practice. Neurocomputing, 415:295–316.
ICISSP 2024 - 10th International Conference on Information Systems Security and Privacy
178