Sum throughput maximization scheme for NOMA-enabled D2D groups using deep reinforcement learning in 5G and beyond networks.

Bibliographic Details
Main Authors: Alam Khan, Mohammad Aftab, Mad Kaidi, Hazilah, Ahmad, Norulhusna, Rehman, Masood Ur
Format: Article
Published: Institute of Electrical and Electronics Engineers Inc. 2023
Subjects:
Online Access:http://eprints.utm.my/104947/
http://dx.doi.org/10.1109/JSEN.2023.3276799
Description
Summary: Device-to-device (D2D) communication underlaying a cellular network is a promising approach for improving spectral efficiency. In this setting, however, D2D transmissions generate cross-channel and co-channel interference to cellular and other D2D users, which poses a significant technical challenge for spectrum allocation. In addition, massive connectivity is a further issue that must be addressed in 5G and beyond networks. To overcome these problems, nonorthogonal multiple access (NOMA) is integrated with the D2D groups (DGs). In this article, the target is to maximize the sum throughput of the overall network while maintaining the signal-to-interference-plus-noise ratio (SINR) of the cellular and D2D users. To achieve this, a distributed spectrum allocation framework based on multiagent deep reinforcement learning (MADRL), termed deep deterministic policy gradient (DDPG), is proposed, in which agents share global historical states, actions, and policies during centralized training. Furthermore, a proximal online policy scheme (POPS) is used to decrease the computational complexity of training: it employs a clipped surrogate technique to simplify and reduce complexity at the training stage. Simulation results demonstrate that the proposed POPS scheme attains 16.67%, 24.98%, and 59.09% higher performance than DDPG, deep dueling, and deep Q-network (DQN), respectively.
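The two concrete technical ingredients named in the abstract are the sum-throughput objective under per-user SINR constraints and the clipped surrogate (PPO-style) objective that POPS uses to reduce training complexity. The sketch below illustrates both in plain NumPy under simplified assumptions: the function names, channel-gain layout, threshold values, and penalty handling are hypothetical and not taken from the paper, and the NOMA decoding order within a D2D group is omitted.

```python
import numpy as np

def sum_throughput_reward(p_c, p_d, g_c, g_d, g_c2d, g_d2c, g_dd,
                          noise=1e-13, sinr_min_c=2.0, sinr_min_d=1.0):
    """Illustrative per-step reward for one channel shared by a cellular
    user and K D2D pairs (an assumed model, not the paper's exact one).

    p_c          cellular transmit power (scalar)
    p_d (K,)     D2D transmit powers
    g_c          cellular link gain to the base station
    g_d (K,)     D2D direct link gains
    g_c2d (K,)   gains from the cellular transmitter to each D2D receiver
    g_d2c (K,)   gains from each D2D transmitter to the base station
    g_dd (K,K)   gains between D2D pairs (co-channel interference)
    """
    # Cellular SINR: D2D transmitters create cross-channel interference.
    sinr_c = p_c * g_c / (noise + np.dot(p_d, g_d2c))

    # D2D SINRs: interference from the cellular user and the other pairs,
    # excluding each pair's own desired signal.
    inter_dd = g_dd @ p_d - np.diag(g_dd) * p_d
    sinr_d = p_d * g_d / (noise + p_c * g_c2d + inter_dd)

    # Zero reward if any SINR constraint is violated (one simple choice;
    # a penalty term would also work).
    if sinr_c < sinr_min_c or np.any(sinr_d < sinr_min_d):
        return 0.0
    return float(np.log2(1.0 + sinr_c) + np.sum(np.log2(1.0 + sinr_d)))

def clipped_surrogate_loss(ratio, advantage, eps=0.2):
    """PPO-style clipped surrogate objective, the standard form of the
    'clipping' technique the abstract attributes to POPS.

    ratio      pi_new(a|s) / pi_old(a|s) for a batch of transitions
    advantage  estimated advantages for the same transitions
    """
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    # Take the pessimistic (minimum) objective and return its negative,
    # so that minimizing this loss maximizes the clipped surrogate.
    return -np.mean(np.minimum(ratio * advantage, clipped * advantage))
```

The clipping keeps each policy update close to the behavior policy, which is the usual reason such a surrogate is cheaper and more stable to train than an unconstrained policy-gradient step; how the paper combines this with the MADRL/DDPG centralized-training setup is described in the full text.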