Authors: Thanh Nguyen 1; Zhi Chen 1; Kento Hasegawa 2; Kazuhide Fukushima 2 and Razvan Beuran 1
Affiliations: 1 Japan Advanced Institute of Science and Technology, Japan; 2 KDDI Research, Inc., Japan
Keyword(s):
Penetration Testing, Reinforcement Learning, Agent Training Environment, Cyber Range.
Abstract:
Penetration testing (pentesting) is an essential method for identifying and exploiting vulnerabilities in computer systems to improve their security. Recently, reinforcement learning (RL) has emerged as a promising approach for creating autonomous pentesting agents. However, the lack of realistic agent training environments has hindered the development of effective RL-based pentesting agents. To address this issue, we propose PenGym, a framework that provides real environments for training pentesting RL agents. PenGym makes available both network discovery and host-based exploitation actions to train, test, and validate RL agents in an emulated network environment. Our experiments demonstrate the feasibility of this approach; its main advantage over typical simulation-based agent training is that PenGym executes real pentesting actions in a real network environment while keeping training time reasonable. Therefore, PenGym does not need to model actions using assumptions and probabilities, since actions are conducted in an actual network and their results reflect actual system behavior. Furthermore, our results show that RL agents trained with PenGym took fewer steps on average to reach the pentesting goal (7.72 steps in our experiments, compared to 11.95 steps for simulation-trained agents).
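To illustrate the training workflow the abstract describes, the minimal sketch below shows how an RL agent could interact with a Gymnasium-style pentesting environment. The environment id, scenario name, and reward handling are illustrative assumptions for this sketch, not PenGym's actual API.

# Minimal sketch of a Gymnasium-style interaction loop for a pentesting
# environment. The environment id is a hypothetical placeholder, not
# PenGym's actual registered name.
import gymnasium as gym
import numpy as np

def average_steps_to_goal(env_id: str = "PenGym-TinyScenario-v0",
                          episodes: int = 5) -> float:
    """Run a random-policy baseline and return the mean episode length."""
    env = gym.make(env_id)  # assumes the scenario is registered with Gymnasium
    steps_per_episode = []
    for _ in range(episodes):
        obs, info = env.reset()
        done, steps = False, 0
        while not done:
            # A trained RL agent would select actions from its policy here;
            # the random sample stands in for scans and exploit attempts.
            action = env.action_space.sample()
            obs, reward, terminated, truncated, info = env.step(action)
            done = terminated or truncated
            steps += 1
        steps_per_episode.append(steps)
    env.close()
    return float(np.mean(steps_per_episode))

if __name__ == "__main__":
    print("Average steps to goal:", average_steps_to_goal())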