Multi-agent deep Q-network for traffic signal control under rainfall: a case study on Sunway City, Malaysia
Main Author:
Format: Thesis
Published: 2021
Subjects:
Online Access: http://eprints.sunway.edu.my/2400/
Summary: In most urban areas, traffic congestion is a vexing, complex and steadily growing problem. Reinforcement learning (RL) enables a single decision maker (or agent) to learn and take optimal actions independently, while multi-agent reinforcement learning (MARL) enables multiple agents to exchange knowledge, learn, and take optimal joint actions collaboratively. Integrating deep learning with the traditional RL approach has produced the deep Q-network (DQN), which has shown promising results on high-dimensional and complex problems, including traffic congestion. In this research, DQN is embedded in traffic signal control to address traffic congestion, a problem plagued by the curse of dimensionality: the representation of the operating environment becomes high-dimensional and complex when the traditional RL approach is used. Most importantly, this research proposes a multi-agent DQN (MADQN) and investigates its use to further address the curse of dimensionality under traffic network scenarios with high traffic volume and disturbances. To investigate the effectiveness of our proposed scheme, a case study of an urban area, Sunway City in Malaysia, is conducted. We evaluate our scheme via simulation using the traffic network simulator Simulation of Urban MObility (SUMO) and MATLAB. Simulation results show that our proposed scheme increases throughput by [59, 97] and [54, 96] vehicles for recurring and non-recurring traffic congestion, respectively; reduces queue length by [2, 9] and [2, 10] vehicles for recurring and non-recurring traffic congestion, respectively; and reduces waiting time by [0, 9] seconds for both types of congestion.
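The summary above describes a DQN-based signal controller. As a rough, illustrative sketch of the kind of Q-learning update such an agent performs for a single intersection, the snippet below trains a small Q-network with the standard bootstrapped target y = r + γ·max_a Q_target(s', a). This is not the thesis implementation: the state (queue length per approach), action set (candidate green phases), reward (negative total queue), network sizes, and use of PyTorch are all assumptions made purely for illustration; the thesis itself evaluates its scheme with SUMO and MATLAB.

```python
# Minimal, illustrative DQN sketch for a single traffic-signal agent.
# NOT the thesis implementation: state, actions, reward, and network
# sizes are assumptions made for illustration only.
import random
from collections import deque

import torch
import torch.nn as nn

N_APPROACHES = 4   # assumed: queue length observed on 4 approaches
N_PHASES = 4       # assumed: 4 candidate green phases
GAMMA = 0.95       # discount factor

class QNet(nn.Module):
    """Small MLP mapping a traffic state to one Q-value per phase."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_APPROACHES, 32), nn.ReLU(),
            nn.Linear(32, N_PHASES),
        )
    def forward(self, x):
        return self.net(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer

def select_phase(state, epsilon=0.1):
    """Epsilon-greedy selection over the candidate green phases."""
    if random.random() < epsilon:
        return random.randrange(N_PHASES)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=32):
    """One DQN update: y = r + gamma * max_a Q_target(s', a)."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = zip(*batch)
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(r, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    q = q_net(s).gather(1, a).squeeze(1)                  # Q(s, a)
    with torch.no_grad():
        y = r + GAMMA * target_net(s2).max(1).values      # bootstrapped target
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy usage with fabricated queue-length observations (illustration only):
state = [3.0, 5.0, 2.0, 4.0]            # queued vehicles per approach
action = select_phase(state)
next_state = [2.0, 4.0, 2.0, 3.0]
reward = -sum(next_state)               # assumed reward: negative total queue
replay.append((state, action, reward, next_state))
train_step()
```

In the multi-agent (MADQN) setting described in the summary, each intersection would run an agent of this kind and exchange knowledge with neighbouring agents to choose joint actions; the specific coordination mechanism belongs to the thesis and is not sketched here.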