The Evolution of Criticality in Deep Reinforcement Learning
Chidvilas Karpenahalli Ramakrishna, Adithya Mohan, Zahra Zeinaly, Lenz Belzner
2025
Abstract
In Reinforcement Learning (RL), certain states demand special attention due to their significant influence on outcomes; these are identified as critical states. The concept of criticality is essential for developing effective and robust policies and for improving overall trust in RL agents in real-world applications such as autonomous driving. The current paper takes a deep dive into criticality and studies its evolution throughout training. The experiments are conducted on a new, simple yet intuitive continuous cliff maze environment and the Highway-env autonomous driving environment. A novel finding is reported: criticality is not only learned by the agent but can also be unlearned. We hypothesize that diversity of experiences, which is largely driven by the chosen exploration strategy, is necessary for effective criticality quantification. This close relationship between exploration and criticality is studied using two different strategies, namely exponential ε-decay and adaptive ε-decay. The study supports the idea that effective exploration plays a crucial role in accurately identifying and understanding critical states.
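To make the two exploration schedules named in the abstract concrete, the sketch below contrasts a purely step-based exponential ε-decay with a simple performance-driven adaptive ε-decay. The function names, decay constants, and the improvement-based adaptation rule are illustrative assumptions for this sketch, not the paper's exact formulation.

import random

def exponential_epsilon(step, eps_start=1.0, eps_end=0.05, decay_rate=0.995):
    # Illustrative exponential epsilon-decay: epsilon shrinks geometrically with
    # the training step, independent of the agent's learning progress.
    return max(eps_end, eps_start * (decay_rate ** step))

def adaptive_epsilon(current_eps, recent_return, best_return,
                     eps_min=0.05, shrink=0.99, grow=1.01):
    # Illustrative adaptive epsilon-decay: epsilon is reduced while the agent
    # keeps improving and gently increased again when performance stagnates,
    # so exploration reacts to learning progress rather than the step count alone.
    if recent_return >= best_return:
        return max(eps_min, current_eps * shrink)
    return min(1.0, current_eps * grow)

def epsilon_greedy(q_values, epsilon):
    # Standard epsilon-greedy action selection, usable with either schedule.
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

Under either schedule the agent trades exploration for exploitation over time; the difference studied in the paper is whether that trade-off is fixed in advance (exponential) or coupled to the agent's observed progress (adaptive).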
Paper Citation
in Harvard Style
Karpenahalli Ramakrishna C., Mohan A., Zeinaly Z. and Belzner L. (2025). The Evolution of Criticality in Deep Reinforcement Learning. In Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART; ISBN 978-989-758-737-5, SciTePress, pages 217-224. DOI: 10.5220/0013114200003890
in BibTeX Style
@conference{icaart25,
author={Chidvilas Karpenahalli Ramakrishna and Adithya Mohan and Zahra Zeinaly and Lenz Belzner},
title={The Evolution of Criticality in Deep Reinforcement Learning},
booktitle={Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART},
year={2025},
pages={217-224},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013114200003890},
isbn={978-989-758-737-5},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART
TI - The Evolution of Criticality in Deep Reinforcement Learning
SN - 978-989-758-737-5
AU - Karpenahalli Ramakrishna C.
AU - Mohan A.
AU - Zeinaly Z.
AU - Belzner L.
PY - 2025
SP - 217
EP - 224
DO - 10.5220/0013114200003890
PB - SciTePress