Research Article
Reinforcement Learning-Based Radio Access Network Slicing for a 5G System with Support for Cellular V2X
@INPROCEEDINGS{10.1007/978-3-030-25748-4_20,
  author={Haider Albonda and J. P\'{e}rez-Romero},
  title={Reinforcement Learning-Based Radio Access Network Slicing for a 5G System with Support for Cellular V2X},
  booktitle={Cognitive Radio-Oriented Wireless Networks: 14th EAI International Conference, CrownCom 2019, Poznan, Poland, June 11--12, 2019, Proceedings},
  series={CROWNCOM},
  publisher={Springer},
  year={2019},
  month={8},
  keywords={Vehicle-to-everything (V2X), Network slicing, Reinforcement learning},
  doi={10.1007/978-3-030-25748-4_20}
}
- Haider Albonda
- J. Pérez-Romero
Year: 2019
Reinforcement Learning-Based Radio Access Network Slicing for a 5G System with Support for Cellular V2X
CROWNCOM
Springer
DOI: 10.1007/978-3-030-25748-4_20
Abstract
5G mobile systems are expected to host a variety of services and applications such as enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (URLLC). A major challenge in designing 5G networks is therefore how to support different types of users and applications with different quality-of-service requirements on a single physical network infrastructure. Recently, Radio Access Network (RAN) slicing has been introduced as a promising solution to this challenge. In this direction, our paper investigates the RAN slicing problem when providing two generic 5G services, namely eMBB and Cellular Vehicle-to-everything (V2X). We propose an efficient RAN slicing scheme based on offline reinforcement learning that allocates radio resources to different slices while accounting for their utility requirements and the dynamic changes in traffic load, in order to maximize resource-utilization efficiency. A simulation-based analysis is presented to assess the performance of the proposed solution.
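To illustrate the general idea described in the abstract (not the paper's actual scheme), the following is a minimal sketch of offline reinforcement learning for splitting a radio resource pool between an eMBB slice and a V2X slice. The resource-pool size, discretized load states, utility weights, and the bandit-style Q-learning update are all invented assumptions for illustration; the paper's real state/action spaces and utility functions are not reproduced here.

```python
import random

# Toy setting (all values assumed, not from the paper): split a fixed pool
# of radio resource blocks (RBs) between a V2X slice and an eMBB slice.
TOTAL_RBS = 10            # assumed size of the shared resource pool
LOADS = [0.3, 0.6, 0.9]   # assumed discretized V2X traffic-load states

def utility(rbs_v2x, v2x_load):
    """Invented utility: reward covering V2X demand; leftover RBs serve eMBB."""
    demand = v2x_load * TOTAL_RBS
    v2x_sat = min(rbs_v2x, demand) / demand       # V2X satisfaction in [0, 1]
    embb_sat = (TOTAL_RBS - rbs_v2x) / TOTAL_RBS  # eMBB share of remaining RBs
    return 0.6 * v2x_sat + 0.4 * embb_sat         # assumed slice weights

def train(episodes=6000, alpha=0.5, epsilon=0.2, seed=0):
    """Offline training loop: learn Q(load_state, rbs_for_v2x) from simulated
    episodes with epsilon-greedy exploration (stateless, bandit-style update)."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(len(LOADS)) for a in range(TOTAL_RBS + 1)}
    for _ in range(episodes):
        s = rng.randrange(len(LOADS))             # sample a traffic-load state
        if rng.random() < epsilon:
            a = rng.randrange(TOTAL_RBS + 1)      # explore a random allocation
        else:
            a = max(range(TOTAL_RBS + 1), key=lambda x: q[(s, x)])
        r = utility(a, LOADS[s])
        q[(s, a)] += alpha * (r - q[(s, a)])      # move Q toward observed utility
    return q

def policy(q):
    """Greedy allocation per load state: RBs assigned to the V2X slice."""
    return {s: max(range(TOTAL_RBS + 1), key=lambda a: q[(s, a)])
            for s in range(len(LOADS))}
```

Under these assumed weights, the learned policy tracks the V2X demand (e.g. allocating more RBs to the V2X slice as its load rises), which mirrors the abstract's point that the allocation should adapt to dynamic traffic-load changes rather than stay static.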