The revolutionary paradigm of 5G network slicing opens promising market opportunities through multi-tenancy support: operators can offer customized slices to different tenants at different prices, an emerging business model. However, it is challenging for network slicing to deliver high-performance, cost-effective services while keeping resource utilisation aligned with customer activity. This paper therefore proposes a Deep Reinforcement Learning-based Traffic Scheduling Model (DRLTSM) that interacts with the environment, exploring alternative actions and reinforcing the patterns believed to yield favourable outcomes. The DRL formulation for network slicing addresses power control and core network slicing, while priority-based sizing handles radio resources. The paper develops three main network slicing blocks: (i) traffic analysis and network slice forecasting, (ii) network slice admission management decisions, and (iii) adaptive load prediction corrections based on calculated deviations. Our findings show that DRLTSM substantially improves efficiency, reaching a rate of 97.32%, as well as scalability and compatibility, compared with its baseline.
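The abstract does not give implementation details, but the slice admission management block (ii) can be illustrated with a minimal reinforcement-learning sketch. The following is not the authors' DRLTSM: it is a toy tabular Q-learning agent under assumed dynamics (a fixed slice capacity, a fixed revenue/overload reward, and the simplifying assumption that a slice may depart when a request is rejected), included only to show the accept/reject decision loop in principle.

```python
import random

# Toy slice-admission environment (illustrative assumptions, not from the paper):
# state = number of active slices (0..CAPACITY); action = reject (0) or accept (1).
CAPACITY = 5
ACTIONS = (0, 1)

def step(state, action):
    """Return (next_state, reward) for one admission decision."""
    if action == 1:
        if state < CAPACITY:
            return state + 1, 1.0   # slice admitted: earn revenue
        return state, -5.0          # admission at full capacity: overload penalty
    # on rejection, assume an existing slice may depart, freeing capacity
    return max(state - 1, 0), 0.0

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy exploration policy."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(CAPACITY + 1) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        for _ in range(20):  # fixed-length episode of incoming slice requests
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# Greedy admission policy learned per load level.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(CAPACITY + 1)}
```

Under these assumed rewards, the learned policy admits requests while capacity remains and rejects them at full load; the paper's actual model operates on forecast traffic and deviation-corrected load predictions rather than this stylized state.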
- Number of pages: 13
- Journal: Computers and Electrical Engineering
- Early online date: 12 Apr 2022
- Publication status: Published - May 2022
Bibliographical note
Funding Information:
This research is supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R195), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Copyright © 2022. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/
- Deep reinforcement learning
- Network slicing
- Traffic scheduling