Authors:
Manish Bhattarai 1,2 and Manel Martínez-Ramón 2
Affiliations:
1 Los Alamos National Laboratory, Los Alamos, NM, 87544, U.S.A.
2 University of New Mexico, Albuquerque, NM, 87106, U.S.A.
Keyword(s):
Path Planning, Navigation, Firefighting, Decision Making, Reinforcement Learning, Deep Q-learning, Situational Awareness.
Abstract:
Live fire creates a dynamic, rapidly changing environment that poses a worthy challenge for deep learning and artificial intelligence methodologies to assist firefighters with scene comprehension: maintaining situational awareness, and tracking and relaying the important features needed for key decisions as they tackle these catastrophic events. We propose a deep Q-learning based agent that is immune to stress-induced disorientation and anxiety and is thus able to make clear navigation decisions for firefighters based on the facts observed and stored in live fire environments. As a proof of concept, we simulate a structural fire in Unreal Engine, a game engine that enables the agent to interact with the environment. The agent is trained with a deep Q-learning algorithm using a set of rewards and penalties tied to its actions on the environment. We exploit experience replay to accelerate the learning process and augment the agent's learning with human-derived experiences. The agent trained under this deep Q-learning approach outperforms agents trained through alternative path planning systems and demonstrates this methodology as a promising foundation on which to build a path planning navigation assistant. Such an assistant could safely guide firefighters through live-fire environments in fireground navigation activities ranging from exploration to personnel rescue.
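The sketch below illustrates the general shape of a deep Q-learning agent with experience replay as described in the abstract: a Q-network mapping observations to per-action values, a replay buffer of transitions (which could also be seeded with human-derived demonstrations), and a temporal-difference update. All dimensions, hyperparameters, and names are illustrative assumptions; the paper's Unreal Engine environment, reward scheme, and network architecture are not reproduced here.

```python
# Minimal deep Q-learning sketch with experience replay (PyTorch).
# All sizes, hyperparameters, and the environment interface are
# hypothetical placeholders, not the paper's actual configuration.
import random
from collections import deque

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps an observation vector to one Q-value per navigation action."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)


class ReplayBuffer:
    """Stores (state, action, reward, next_state, done) transitions.
    Human-derived experiences could be pre-loaded via push() as well."""
    def __init__(self, capacity: int = 50_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, *transition):
        self.buffer.append(transition)

    def sample(self, batch_size: int):
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = zip(*batch)
        return (torch.tensor(s, dtype=torch.float32),
                torch.tensor(a, dtype=torch.int64),
                torch.tensor(r, dtype=torch.float32),
                torch.tensor(s2, dtype=torch.float32),
                torch.tensor(d, dtype=torch.float32))


def dqn_update(q_net, target_net, buffer, optimizer,
               batch_size: int = 64, gamma: float = 0.99):
    """One gradient step on the temporal-difference loss."""
    if len(buffer.buffer) < batch_size:
        return
    s, a, r, s2, d = buffer.sample(batch_size)
    # Q(s, a) for the actions actually taken.
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target: reward plus discounted best next-state value.
        target = r + gamma * (1 - d) * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a typical training loop, the agent would act epsilon-greedily on the environment, push each resulting transition into the buffer (alongside any demonstration data), and call dqn_update at every step, periodically copying the Q-network weights into the target network.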