
Research Article
On the Analysis of Computational Delays in Reinforcement Learning-Based Rate Adaptation Algorithms
@INPROCEEDINGS{10.1007/978-3-031-57523-5_23, author={Ricardo Trancoso and Jo\~{a}o Pinto and Ruben Queiros and Helder Fontes and Rui Campos}, title={On the Analysis of Computational Delays in Reinforcement Learning-Based Rate Adaptation Algorithms}, proceedings={Simulation Tools and Techniques. 15th EAI International Conference, SIMUtools 2023, Seville, Spain, December 14-15, 2023, Proceedings}, proceedings_a={SIMUTOOLS}, year={2024}, month={4}, keywords={Reinforcement Learning; Rate Adaptation; Computational Delay}, doi={10.1007/978-3-031-57523-5_23} }
Ricardo Trancoso
João Pinto
Ruben Queiros
Helder Fontes
Rui Campos
Year: 2024
On the Analysis of Computational Delays in Reinforcement Learning-Based Rate Adaptation Algorithms
SIMUTOOLS
Springer
DOI: 10.1007/978-3-031-57523-5_23
Abstract
Several research works have applied Reinforcement Learning (RL) algorithms to solve the Rate Adaptation (RA) problem in Wi-Fi networks. The dynamic nature of the radio link requires the algorithms to be responsive to changes in link quality. Delays in the execution of the algorithm due to implementation details may be detrimental to its performance, which in turn may decrease network performance. These delays can be avoided only to a certain extent. However, this aspect has been overlooked in the state of the art when using simulated environments, since computational delays are not considered. In this paper, we present an analysis of computational delays and their impact on the performance of RL-based RA algorithms, and propose a methodology to incorporate into a simulation environment the computational delays measured experimentally when running the algorithms on specific target hardware. Our simulation results considering the real computational delays show that these delays do, in fact, degrade the algorithm’s execution and training capabilities, which in the end has a negative impact on network performance.
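To illustrate the kind of methodology the abstract describes, the following is a minimal sketch (not the authors' implementation) of how an experimentally measured computational delay could be injected into a simulated RL-based rate adaptation loop. All concrete values here are hypothetical placeholders: the candidate rate set, the constant 4 ms inference delay, the toy channel model, and the epsilon-greedy learner are assumptions made only for illustration.

```python
# Illustrative sketch: the rate chosen by the agent only takes effect after the
# measured computational delay, so it is applied to a later channel state.
# All parameters below are hypothetical, not taken from the paper.

import math
import random

MCS_RATES_MBPS = [6, 12, 24, 48, 54]   # hypothetical candidate PHY rates
INFERENCE_DELAY_S = 0.004              # example delay measured on target hardware
DECISION_PERIOD_S = 0.010              # how often the agent selects a rate


def link_quality(t):
    """Toy time-varying link quality in [0, 1], standing in for a real channel model."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * t / 0.2)


def epsilon_greedy(q_values, eps=0.1):
    """Simple epsilon-greedy action selection over the candidate rates."""
    if random.random() < eps:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])


def simulate(duration_s=1.0, with_delay=True):
    """Run a toy RA loop and return the achieved mean rate in Mbps."""
    q = [0.0] * len(MCS_RATES_MBPS)
    alpha, t, delivered = 0.1, 0.0, 0.0
    while t < duration_s:
        action = epsilon_greedy(q)
        # Inject the computational delay: the decision is applied at a later time,
        # so the channel may have changed since the state the agent observed.
        applied_at = t + (INFERENCE_DELAY_S if with_delay else 0.0)
        success_prob = link_quality(applied_at) * (1.0 - action / (2 * len(q)))
        reward = MCS_RATES_MBPS[action] if random.random() < success_prob else 0.0
        q[action] += alpha * (reward - q[action])   # incremental Q-value update
        delivered += reward * DECISION_PERIOD_S
        t += DECISION_PERIOD_S
    return delivered / duration_s


if __name__ == "__main__":
    random.seed(0)
    print("mean rate without delay:", simulate(with_delay=False))
    random.seed(0)
    print("mean rate with delay:   ", simulate(with_delay=True))
```

Because both runs consume the same random stream, any difference in the reported mean rate comes solely from the injected delay, which is the effect the paper sets out to quantify with real, hardware-measured delay values rather than the constant assumed here.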