Authors:
Junping Xiang 1 and Zonghai Chen 2
Affiliations:
1 University of Science and Technology of China, Lianyungang JARI Electronics Co., Ltd. of CSIC, China; 2 University of Science and Technology of China, China
Keyword(s):
Grey Qualitative, Reinforcement Learning, Bottleneck Subzone Control, BP Neural Networks.
Related Ontology Subjects/Areas/Topics:
Applications; Learning and Adaptive Control; Learning in Process Automation; Pattern Recognition; Software Engineering
Abstract:
A Grey Qualitative Reinforcement Learning algorithm is presented in this paper to realize adaptive signal control of a bottleneck subzone, which is described as a nonlinear optimization problem. To handle the uncertainties in the traffic flow system, a grey theory model and a qualitative method are used to express the sensor data. To avoid deducing the functional relationship between the traffic flow and the timing plan, a grey reinforcement learning algorithm, which is the main innovation of this paper, is proposed to seek the solution. To enhance the generalization capability of the system, avoid the "curse of dimensionality", and improve the convergence speed, a BP neural network is used to approximate the Q-function. We conduct three simulation experiments (calibrated with real data) using four evaluation indicators for comparison and analysis. Simulation results show that the proposed method can significantly improve the traffic situation of the bottleneck subzone, and that the algorithm has good robustness and low noise sensitivity.
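The abstract does not include implementation details, so the sketch below is only an illustrative aid: it shows how a Q-function can be approximated with a small BP (backpropagation) neural network inside a standard Q-learning update, which is the general technique the abstract names. All names, dimensions, and the state/action/reward encoding here are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): Q-learning with a one-hidden-layer BP
# neural network approximating the Q-function. State features, action set, and
# reward are hypothetical placeholders.
import numpy as np

class BPQNetwork:
    """One-hidden-layer BP network mapping a state feature vector to Q-values per action."""

    def __init__(self, n_features, n_actions, n_hidden=16, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.lr = lr

    def forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)      # hidden-layer activation
        q = h @ self.W2 + self.b2               # Q-value estimate for every action
        return h, q

    def update(self, x, action, td_target):
        """One gradient step pushing Q(x, action) toward the TD target."""
        h, q = self.forward(x)
        error = q[action] - td_target           # scalar TD error for the taken action
        # Backpropagate through the output layer (only the chosen action's column changes).
        dW2 = np.outer(h, np.eye(len(q))[action]) * error
        db2 = np.eye(len(q))[action] * error
        # Backpropagate into the hidden layer (tanh derivative is 1 - h^2).
        dz = self.W2[:, action] * error * (1 - h ** 2)
        dW1 = np.outer(x, dz)
        db1 = dz
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * db1
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * db2

def q_learning_step(net, state, action, reward, next_state, gamma=0.9):
    """Standard Q-learning target: r + gamma * max_a' Q(s', a')."""
    _, q_next = net.forward(next_state)
    net.update(state, action, reward + gamma * np.max(q_next))

# Hypothetical usage: 6 traffic-state features, 4 candidate signal timing plans.
net = BPQNetwork(n_features=6, n_actions=4)
state = np.random.rand(6)           # stand-in for a grey-qualitative state vector
_, q_values = net.forward(state)
action = int(np.argmax(q_values))   # greedy choice (epsilon-greedy in practice)
reward, next_state = -1.0, np.random.rand(6)
q_learning_step(net, state, action, reward, next_state)
```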