
Research Article
Deep Reinforcement Learning for Multi-UAV Exploration Under Energy Constraints
@INPROCEEDINGS{10.1007/978-3-031-24386-8_20,
  author={Yating Zhou and Dianxi Shi and Huanhuan Yang and Haomeng Hu and Shaowu Yang and Yongjun Zhang},
  title={Deep Reinforcement Learning for Multi-UAV Exploration Under Energy Constraints},
  proceedings={Collaborative Computing: Networking, Applications and Worksharing. 18th EAI International Conference, CollaborateCom 2022, Hangzhou, China, October 15-16, 2022, Proceedings, Part II},
  proceedings_a={COLLABORATECOM PART 2},
  year={2023},
  month={1},
  keywords={Multi-UAV exploration, Deep reinforcement learning, Energy constraints},
  doi={10.1007/978-3-031-24386-8_20}
}
Yating Zhou
Dianxi Shi
Huanhuan Yang
Haomeng Hu
Shaowu Yang
Yongjun Zhang
Year: 2023
Deep Reinforcement Learning for Multi-UAV Exploration Under Energy Constraints
COLLABORATECOM PART 2
Springer
DOI: 10.1007/978-3-031-24386-8_20
Abstract
Autonomous exploration is an essential task for many applications of unmanned aerial vehicles (UAVs), yet energy-constrained multi-UAV exploration methods remain scarce. In this paper, we propose RTN-Explorer, an environment exploration strategy that satisfies energy constraints. The goal of exploration is to expand the explored area as much as possible, while the energy constraint requires each UAV to return to a landing zone before its energy is exhausted; these two goals conflict with each other. To better balance them, we use map centering and local-global map processing to improve system performance, and a minimum-distance penalty function to make the multi-UAV system satisfy the energy constraints. We also use a map generator to produce diverse environment maps, which improves generalization. Extensive simulation experiments verify the effectiveness and robustness of our method and show superior performance in benchmark comparisons.
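The abstract does not specify the exact form of the minimum-distance penalty, but the idea of penalizing a UAV whose remaining energy budget cannot cover the distance to the nearest landing zone can be illustrated with a minimal sketch. The function below is an assumption for illustration only: the grid metric, the `step_cost` parameter, and the linear shortfall penalty are hypothetical choices, not the authors' method.

```python
import numpy as np

def landing_penalty(uav_pos, landing_zones, remaining_energy, step_cost, weight=1.0):
    """Hedged sketch of a minimum-distance penalty for energy-constrained return.

    Penalizes a UAV whose remaining energy budget falls short of the distance
    (in grid steps) to the nearest landing zone. The functional form used by
    RTN-Explorer is not given in the abstract; this is an illustrative assumption.
    """
    # Manhattan distance from the UAV to its closest landing zone.
    dists = [np.linalg.norm(np.subtract(uav_pos, z), ord=1) for z in landing_zones]
    min_dist = min(dists)

    # Number of steps the UAV can still afford with its remaining energy.
    affordable_steps = remaining_energy / step_cost

    # No penalty while the nearest landing zone is still reachable;
    # otherwise penalize proportionally to the shortfall.
    shortfall = max(0.0, min_dist - affordable_steps)
    return -weight * shortfall


# Example: a UAV at (5, 7) with energy for 4 more steps, nearest pad 5 steps away.
print(landing_penalty((5, 7), [(0, 7), (5, 1)], remaining_energy=4.0, step_cost=1.0))
```

In a reward-shaping setup, such a term would be added to the exploration reward at each step so that the learned policy trades off coverage gain against the risk of stranding a UAV away from the landing zone.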