Hierarchically and Cooperatively Learning Traffic Signal Control
DOI:
https://doi.org/10.1609/aaai.v35i1.16147
Keywords:
Transportation, Reinforcement Learning, Coordination and Collaboration
Abstract
Deep reinforcement learning (RL) has recently been applied to traffic signal control and has demonstrated superior performance to conventional control methods. However, there are still several challenges to address before deep RL can be fully applied to traffic signal control. Firstly, the objective of traffic signal control is to optimize average travel time, which, in the context of RL, is a delayed reward over a long time horizon. Existing work simplifies the optimization by using queue length, waiting time, delay, etc., as the immediate reward and presumes these short-term targets are always aligned with the objective. Nevertheless, these targets may deviate from the objective in different road networks with various traffic patterns. Secondly, it remains unsolved how to cooperatively control traffic signals to directly optimize average travel time. To address these challenges, we propose a hierarchical and cooperative reinforcement learning method, HiLight. HiLight enables each agent to learn a high-level policy that optimizes the objective locally by selecting among sub-policies that respectively optimize short-term targets. Moreover, the high-level policy also considers the objective in the neighborhood, with adaptive weighting, to encourage agents to cooperate on the objective across the road network. Empirically, we demonstrate that HiLight outperforms state-of-the-art RL methods for traffic signal control in real road networks with real traffic.
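To make the hierarchical structure described in the abstract concrete, below is a minimal sketch of an intersection agent whose high-level policy selects among sub-policies (each tied to a short-term target such as queue length, waiting time, or delay), mixing local and neighborhood value estimates with an adaptive weight. All class names, method signatures, and numbers are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of HiLight-style hierarchical control (assumed names,
# not the authors' code): sub-policies optimize short-term targets, and a
# high-level policy picks one per decision step using a weighted mix of
# local and neighborhood long-term value estimates.


class SubPolicy:
    """Placeholder low-level policy trained toward one short-term target."""

    def __init__(self, target_name: str, num_phases: int):
        self.target_name = target_name
        self.num_phases = num_phases

    def act(self, observation: np.ndarray) -> int:
        # A real sub-policy would be a trained RL policy over signal phases;
        # here a random phase stands in for illustration.
        return int(np.random.randint(self.num_phases))


class HighLevelAgent:
    """Selects a sub-policy each decision step, weighting the local
    travel-time objective against the neighborhood objective."""

    def __init__(self, sub_policies, neighbor_weight: float = 0.5):
        self.sub_policies = sub_policies
        self.neighbor_weight = neighbor_weight  # would be adapted in training

    def select_sub_policy(self, local_values, neighbor_values) -> int:
        # local_values / neighbor_values: estimated long-term returns
        # (e.g. negative travel time) for each sub-policy, measured locally
        # and averaged over neighboring intersections.
        mixed = (1.0 - self.neighbor_weight) * np.asarray(local_values) \
            + self.neighbor_weight * np.asarray(neighbor_values)
        return int(np.argmax(mixed))

    def act(self, observation, local_values, neighbor_values) -> int:
        k = self.select_sub_policy(local_values, neighbor_values)
        return self.sub_policies[k].act(observation)


if __name__ == "__main__":
    subs = [SubPolicy(t, num_phases=4)
            for t in ("queue_length", "waiting_time", "delay")]
    agent = HighLevelAgent(subs, neighbor_weight=0.3)
    obs = np.zeros(16)                  # dummy lane-level observation
    local = [-120.0, -95.0, -110.0]     # dummy local return estimates
    neighbor = [-100.0, -105.0, -90.0]  # dummy neighborhood estimates
    print("chosen phase:", agent.act(obs, local, neighbor))
```

The point of the sketch is only the control flow: short-term targets live in the sub-policies, while the long-term, partly neighborhood-weighted objective drives the choice among them.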
Published
2021-05-18
How to Cite
Xu, B., Wang, Y., Wang, Z., Jia, H., & Lu, Z. (2021). Hierarchically and Cooperatively Learning Traffic Signal Control. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1), 669-677. https://doi.org/10.1609/aaai.v35i1.16147
Issue
Section
AAAI Technical Track on Application Domains