
Research Article
Wireless Parallel Reinforcement Learning: An Actor-Critic Approach
@inproceedings{10.1007/978-3-031-65123-6_27,
  author    = {Ke Xing and Xinyue Ma and Yanjie Dong},
  title     = {Wireless Parallel Reinforcement Learning: An Actor-Critic Approach},
  booktitle = {Quality, Reliability, Security and Robustness in Heterogeneous Systems. 19th EAI International Conference, QShine 2023, Shenzhen, China, October 8--9, 2023, Proceedings, Part II},
  publisher = {Springer},
  year      = {2024},
  month     = {8},
  keywords  = {Actor-critic; Parallel reinforcement learning; Wireless reinforcement learning},
  doi       = {10.1007/978-3-031-65123-6_27}
}
Authors: Ke Xing, Xinyue Ma, Yanjie Dong
Year: 2024
Published in: QShine 2023, Proceedings Part II (Springer)
DOI: 10.1007/978-3-031-65123-6_27
Abstract
In this study, we introduce a wireless actor-critic method. Leveraging federated learning, wireless terminals train models locally while preserving data privacy, since raw data never needs to be uploaded to a central server. The recently proposed parallel reinforcement learning framework lets each terminal maintain multiple instances of the same environment for parallel data generation. To overcome the double near-far effect during model exchange, we exploit the superposition property of wireless channels. We validate the approach in a practical environment and assess its performance while varying the threshold and power parameters. The experimental results show that the method maintains stable signal transmission under the tested noise conditions. The wireless actor-critic method thus offers a practical solution for training machine learning models over wireless networks; future work will focus on further optimization, extension, and validation in a wider range of real-world scenarios.
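To make the model-exchange step concrete, the sketch below illustrates one common way the superposition property can aggregate local models in a single transmission: each terminal pre-scales its analog signal by the inverse of its channel gain, terminals whose gain falls below a threshold stay silent (mitigating the double near-far effect), and the server receives one noisy sum. This is a minimal sketch under illustrative assumptions; the number of terminals K, the gains h_k, the threshold, the power budget, and the noise level are all hypothetical parameters, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 8, 16                          # terminals, model dimension (illustrative)
W = rng.standard_normal((K, D))       # local model parameters w_k
h = rng.rayleigh(scale=1.0, size=K)   # channel gains h_k (Rayleigh fading)

p_max = 4.0                           # per-terminal transmit power budget
threshold = 1.0 / np.sqrt(p_max)      # exclude deep-fade terminals so that
                                      # channel inversion respects p_max
noise_std = 0.05                      # receiver noise level

# Truncated channel inversion: active terminals pre-scale by 1/h_k so their
# signals superpose with equal weight; deep-fade terminals stay silent,
# which mitigates the double near-far effect.
active = h >= threshold
tx_gain = np.where(active, 1.0 / h, 0.0)

# The channel adds the analog signals "for free": the server observes one
# noisy sum instead of K separate uploads.
received = ((tx_gain * h)[:, None] * W).sum(axis=0)
received += noise_std * rng.standard_normal(D)

# Estimate of the average model over the active terminals.
w_global = received / max(active.sum(), 1)
print("aggregation error:", np.linalg.norm(w_global - W[active].mean(axis=0)))
```

In this sketch the threshold is tied to the power budget (1/sqrt(p_max)) so that channel inversion never exceeds p_max; raising the threshold improves alignment of the received sum but excludes more terminals, which is the kind of trade-off probed by varying the threshold and power parameters.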