Public transport route optimization with reinforcement learning
High transportation demand due to a large population has resulted in traffic congestion problems in cities, which can be addressed through public transport. However, unbalanced passenger demands and traffic conditions can affect the performance of buses. The stop-skipping strategy effectively distributes passenger demand while minimizing bus operating costs.
Main Author: Tay, Bee Sim
Format: Final Year Project / Dissertation / Thesis
Published: 2023
Subjects: TK Electrical engineering. Electronics Nuclear engineering
Online Access: http://eprints.utar.edu.my/5721/1/ET_1804019_FYP_report_%2D_BEE_SIM_TAY.pdf http://eprints.utar.edu.my/5721/
id: my-utar-eprints.5721
record_format: eprints
spelling: my-utar-eprints.5721 2023-07-07T14:38:22Z Public transport route optimization with reinforcement learning Tay, Bee Sim TK Electrical engineering. Electronics Nuclear engineering High transportation demand due to a large population has resulted in traffic congestion problems in cities, which can be addressed through public transport. However, unbalanced passenger demands and traffic conditions can affect the performance of buses. The stop-skipping strategy effectively distributes passenger demand while minimizing bus operating costs if the operator can adapt to changes in passenger demands and traffic conditions. Therefore, this project proposes a deep reinforcement learning-based public transport route optimization where the agent can acquire the optimal strategy by interacting with the dynamic bus environment. This project aims to maximize passenger satisfaction levels while minimizing bus operator expenditures. Thus, the dynamic bus environment is designed based on a bus optimization scheme that comprises one express bus followed by one no-skip bus to serve passengers stranded by skip actions. The reward function is formulated as a function of passenger demand, in-vehicle time, bus running time and passenger waiting time, and is used to train the double deep Q-network (DDQN) agent. Simulation results show that the agent can intelligently skip stations and outperform the conventional method under different passenger distribution patterns. The DDQN approach yields the best performance in the static passenger demand scenario, followed by the scenario with time-varying passenger demands, and lastly the randomly distributed passenger demand scenario. Future studies should consider the load constraints of buses and other factors, such as bus utilization rate, to improve the performance of stop-skipping services for passengers and operators. Real-life passenger data could be incorporated into the DRL model using Internet of Things (IoT) technology for route optimization. 2023 Final Year Project / Dissertation / Thesis NonPeerReviewed application/pdf http://eprints.utar.edu.my/5721/1/ET_1804019_FYP_report_%2D_BEE_SIM_TAY.pdf Tay, Bee Sim (2023) Public transport route optimization with reinforcement learning. Final Year Project, UTAR. http://eprints.utar.edu.my/5721/
institution: Universiti Tunku Abdul Rahman
building: UTAR Library
collection: Institutional Repository
continent: Asia
country: Malaysia
content_provider: Universiti Tunku Abdul Rahman
content_source: UTAR Institutional Repository
url_provider: http://eprints.utar.edu.my
topic: TK Electrical engineering. Electronics Nuclear engineering
spellingShingle: TK Electrical engineering. Electronics Nuclear engineering Tay, Bee Sim Public transport route optimization with reinforcement learning
description: High transportation demand due to a large population has resulted in traffic congestion problems in cities, which can be addressed through public transport. However, unbalanced passenger demands and traffic conditions can affect the performance of buses. The stop-skipping strategy effectively distributes passenger demand while minimizing bus operating costs if the operator can adapt to changes in passenger demands and traffic conditions. Therefore, this project proposes a deep reinforcement learning-based public transport route optimization where the agent can acquire the optimal strategy by interacting with the dynamic bus environment. This project aims to maximize passenger satisfaction levels while minimizing bus operator expenditures. Thus, the dynamic bus environment is designed based on a bus optimization scheme that comprises one express bus followed by one no-skip bus to serve passengers stranded by skip actions. The reward function is formulated as a function of passenger demand, in-vehicle time, bus running time and passenger waiting time, and is used to train the double deep Q-network (DDQN) agent. Simulation results show that the agent can intelligently skip stations and outperform the conventional method under different passenger distribution patterns. The DDQN approach yields the best performance in the static passenger demand scenario, followed by the scenario with time-varying passenger demands, and lastly the randomly distributed passenger demand scenario. Future studies should consider the load constraints of buses and other factors, such as bus utilization rate, to improve the performance of stop-skipping services for passengers and operators. Real-life passenger data could be incorporated into the DRL model using Internet of Things (IoT) technology for route optimization.
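The abstract's core idea, learning a stop-skipping policy from a reward that trades passenger-side costs (waiting, demand left stranded) against operator-side costs (dwell and running time), can be illustrated with a much simpler stand-in for the thesis's method: tabular double Q-learning, the tabular analogue of DDQN's decoupled action selection and evaluation. The environment, reward constants, and function names below are all hypothetical simplifications for illustration, not the simulator or network described in the report.

```python
import random

# Hypothetical simplified stop-skipping setting: a bus visits N_STOPS stops in
# order and at each one either SERVEs or SKIPs. The reward loosely mirrors the
# abstract's trade-off (satisfied demand vs. dwell time when serving; running
# time saved vs. stranded passengers when skipping). Constants are illustrative.
N_STOPS = 10
SERVE, SKIP = 0, 1

def step(stop, action, demand):
    """One environment step; returns (reward, next_stop)."""
    if action == SERVE:
        reward = 2.0 * demand[stop] - 1.0    # satisfied demand minus dwell cost
    else:
        reward = 0.5 - 1.5 * demand[stop]    # time saved minus stranded penalty
    return reward, stop + 1

def train(episodes=3000, alpha=0.1, gamma=0.95, eps=0.2, seed=0):
    rng = random.Random(seed)
    demand = [rng.random() for _ in range(N_STOPS)]   # static demand scenario
    # Two value tables (double Q-learning): one table picks the greedy next
    # action, the other scores it -- the tabular analogue of DDQN's
    # online/target-network split that curbs overestimation bias.
    qa = [[0.0, 0.0] for _ in range(N_STOPS + 1)]
    qb = [[0.0, 0.0] for _ in range(N_STOPS + 1)]
    for _ in range(episodes):
        s = 0
        while s < N_STOPS:
            q = [qa[s][a] + qb[s][a] for a in (SERVE, SKIP)]
            a = rng.randrange(2) if rng.random() < eps else q.index(max(q))
            r, s2 = step(s, a, demand)
            if rng.random() < 0.5:   # randomly choose which table to update
                best = qa[s2].index(max(qa[s2]))
                qa[s][a] += alpha * (r + gamma * qb[s2][best] - qa[s][a])
            else:
                best = qb[s2].index(max(qb[s2]))
                qb[s][a] += alpha * (r + gamma * qa[s2][best] - qb[s][a])
            s = s2
    policy = [SERVE if qa[s][SERVE] + qb[s][SERVE] >= qa[s][SKIP] + qb[s][SKIP]
              else SKIP for s in range(N_STOPS)]
    return policy, demand

if __name__ == "__main__":
    policy, demand = train()
    for s in range(N_STOPS):
        print(f"stop {s}: demand={demand[s]:.2f} -> "
              f"{'serve' if policy[s] == SERVE else 'skip'}")
```

Under this toy reward, the learned policy serves high-demand stops and skips low-demand ones, which is the qualitative behaviour the abstract reports for the DDQN agent; the thesis itself uses a neural Q-network over a richer state (demand, in-vehicle time, running time, waiting time) rather than a lookup table.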
format: Final Year Project / Dissertation / Thesis
author: Tay, Bee Sim
author_facet: Tay, Bee Sim
author_sort: Tay, Bee Sim
title: Public transport route optimization with reinforcement learning
title_short: Public transport route optimization with reinforcement learning
title_full: Public transport route optimization with reinforcement learning
title_fullStr: Public transport route optimization with reinforcement learning
title_full_unstemmed: Public transport route optimization with reinforcement learning
title_sort: public transport route optimization with reinforcement learning
publishDate: 2023
url: http://eprints.utar.edu.my/5721/1/ET_1804019_FYP_report_%2D_BEE_SIM_TAY.pdf http://eprints.utar.edu.my/5721/
_version_: 1772816676679581696