
Using PGD with feature-specific epsilon calculations, we evaluated the resilience of both Random Forest and Neural Network classifiers. The results reveal that even models that perform well on clean data are significantly susceptible to adversarial attacks, underscoring a critical challenge for the deployment of ML-based NIDS in real-world environments.
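To make the attack setup concrete, the following is a minimal sketch of PGD with a feature-specific budget, assuming a differentiable PyTorch classifier over tabular flow features; the function name, step-size convention, and budget construction are illustrative rather than our exact implementation.

import torch

def pgd_attack(model, x, y, eps, alpha=0.25, steps=10):
    # eps: (n_features,) tensor of per-feature perturbation limits,
    # e.g. a fraction of each feature's observed range.
    # alpha: step size expressed as a fraction of eps.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * eps * grad.sign()
            # Project back into the feature-specific epsilon ball around x.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv.detach()

One plausible choice ties the budget to feature scale, for example eps = 0.05 * (x.max(0).values - x.min(0).values), so that perturbations remain in a realistic range across heterogeneous network features.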
Key findings include:
• Vulnerability to Adversarial Attacks. Both Random Forest and Neural Network models, despite achieving high accuracy on unperturbed data, showed notable performance degradation when evaluated on adversarial samples. This vulnerability highlights a substantial risk in cybersecurity, where attackers can exploit these weaknesses to bypass detection systems with minimal perturbations.
• Impact of Adversarial Perturbation Scale. As perturbation levels (epsilon values) increased, the accuracy and AUC of both models dropped markedly, with the Neural Network showing greater sensitivity to smaller perturbations. This comparison indicates that, while the degree of degradation varies by architecture, neither model proved resilient under adversarial conditions, emphasizing the need for systematic adversarial testing and defense mechanisms; a sketch of such an epsilon sweep follows this list.
• Limitations of Retraining Strategies. While retraining on adversarial samples is a promising approach, it often introduces new feature dependencies that adaptive attackers could exploit in turn. Adversarial retraining can therefore improve robustness to some extent, but it may not provide comprehensive protection against evolving threats; a minimal retraining loop is sketched after this list.
• Need for Continuous Adaptation and Evaluation. Our study underscores the importance of ongoing evaluation and adaptation of ML models in cybersecurity, as static models are insufficient in the face of adaptive adversarial strategies. NIDS models must incorporate dynamic and robust defense techniques to maintain security in high-risk environments.
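The epsilon sweep referenced in the second bullet can be sketched as follows, assuming binary labels (benign vs. malicious), a two-logit PyTorch classifier, and the pgd_attack helper from the earlier sketch; the base budget and scale grid are illustrative.

import torch
from sklearn.metrics import accuracy_score, roc_auc_score

def sweep_epsilons(model, x, y, scales=(0.0, 0.01, 0.05, 0.1, 0.2)):
    # Base per-feature budget tied to each feature's observed range.
    base_eps = x.max(dim=0).values - x.min(dim=0).values
    results = []
    for s in scales:
        x_eval = x if s == 0.0 else pgd_attack(model, x, y, eps=s * base_eps)
        with torch.no_grad():
            probs = model(x_eval).softmax(dim=1)[:, 1].numpy()
        preds = (probs >= 0.5).astype(int)
        results.append((s,
                        accuracy_score(y.numpy(), preds),
                        roc_auc_score(y.numpy(), probs)))
    return results  # list of (scale, accuracy, auc)

Plotting accuracy and AUC against the scale makes the differing sensitivities of the two architectures directly comparable.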
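For completeness, the retraining strategy discussed in the third bullet can be sketched as a loop that augments each batch with PGD examples; this is a generic adversarial-training recipe under the same assumptions, not a defense we claim is sufficient.

import torch

def adversarial_retrain(model, loader, eps, epochs=5, lr=1e-3):
    # Mix clean and PGD-perturbed samples in every batch. Note that the
    # retrained model may come to rely on new feature dependencies that
    # an adaptive attacker can probe in turn.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for xb, yb in loader:
            x_adv = pgd_attack(model, xb, yb, eps)
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(
                model(torch.cat([xb, x_adv])), torch.cat([yb, yb]))
            loss.backward()
            opt.step()
    return model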
In summary, while machine learning models are essential for enhancing cybersecurity, their vulnerability to adversarial attacks remains a significant challenge. Future work should explore more adaptive and resilient approaches, including hybrid architectures, continuous adversarial training, and interpretability techniques, to bolster NIDS models against sophisticated and evolving adversarial tactics. This study serves as a call to action for the development of robust and secure NIDS models that can withstand adversarial manipulations while providing reliable protection within enterprise and critical infrastructure networks.
ACKNOWLEDGEMENTS
This work is supported in part by the Center for Equitable Artificial Intelligence and Machine Learning Systems (CEAMLS) at Morgan State University. This paper benefited from the use of OpenAI's ChatGPT for language enhancement, including grammar corrections, rephrasing, and stylistic refinements. All AI-assisted content was subsequently reviewed and approved by the authors to ensure technical accuracy and clarity.