Deep reinforcement learning based resource allocation strategy in cloud-edge computing system
Main Authors:
Format: Conference or Workshop Item
Language: English
Published: 2024
Online Access: http://eprints.uthm.edu.my/11924/1/P17003_0e3c560e211e3d06995d79b44427688c.pdf
http://eprints.uthm.edu.my/11924/
Summary: The growing demand for real-time processing and resource-intensive applications means that cloud-edge computing systems require effective resource allocation schemes. This research examines a Multiagent Learning framework with Deep Reinforcement Learning (MAL-DRL) deployed to allocate resources in such systems, so that end users benefit from optimized service while operators maximize resource utilization. The work evaluates the MAL-DRL algorithm in simulation against classical Random Allocation (RA) and single-agent DRL methods. The results show that MAL-DRL reduces average latency by 44% and increases resource utilization by 35% compared with both alternatives, achieving a combined reward score of 0.80. These findings indicate that the distributed decision-making and learning of MAL-DRL enables faster resource allocation, which translates into lower user-perceived delay and better overall system performance. Although the study is limited by the scope of its simulations and the complexity of training, it demonstrates the prospective benefits of MAL-DRL for on-the-spot management of cloud-edge resources. Future work can explore communication objectives, adaptive learning strategies, and security measures to improve the method's applicability in real-world deployments.
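To make the combined reward metric concrete, below is a minimal sketch of one way such a score could be computed by blending latency savings and resource utilization; the weights, the normalization, and the example values are illustrative assumptions, not taken from the paper.

```python
def combined_reward(avg_latency, latency_bound, utilization,
                    w_latency=0.5, w_util=0.5):
    """Blend latency savings and resource utilization into a single
    score in [0, 1]; higher is better."""
    # Normalize latency against an assumed upper bound so that lower
    # latency yields a higher score.
    latency_score = 1.0 - min(avg_latency / latency_bound, 1.0)
    return w_latency * latency_score + w_util * utilization

# Illustrative values only: a low-latency, high-utilization allocation
# lands near the 0.80 score reported in the abstract.
print(combined_reward(avg_latency=28.0, latency_bound=100.0,
                      utilization=0.88))  # 0.5*0.72 + 0.5*0.88 = 0.80
```

A scalarized reward of this kind lets a single agent (or each agent in a multiagent setting) trade off the two objectives the abstract names, with the weights controlling how much latency reduction is valued relative to utilization.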