Mobile and Ubiquitous Systems: Computing, Networking and Services. 20th EAI International Conference, MobiQuitous 2023, Melbourne, VIC, Australia, November 14–17, 2023, Proceedings, Part I

Research Article

RADEAN: A Resource Allocation Model Based on Deep Reinforcement Learning and Generative Adversarial Networks in Edge Computing

@INPROCEEDINGS{10.1007/978-3-031-63989-0_13,
    author={Zhaoyang Yu and Sinong Zhao and Tongtong Su and Wenwen Liu and Xiaoguang Liu and Gang Wang and Zehua Wang and Victor C. M. Leung},
    title={RADEAN: A Resource Allocation Model Based on Deep Reinforcement Learning and Generative Adversarial Networks in Edge Computing},
    proceedings={Mobile and Ubiquitous Systems: Computing, Networking and Services. 20th EAI International Conference, MobiQuitous 2023, Melbourne, VIC, Australia, November 14--17, 2023, Proceedings, Part I},
    proceedings_a={MOBIQUITOUS},
    year={2024},
    month={7},
    keywords={edge computing, resource allocation, deep reinforcement learning, generative adversarial networks},
    doi={10.1007/978-3-031-63989-0_13}
}
Zhaoyang Yu1, Sinong Zhao1, Tongtong Su1, Wenwen Liu1, Xiaoguang Liu1,*, Gang Wang1, Zehua Wang2, Victor C. M. Leung2
  • 1: College of Computer Science, ICIC, TMCC, SysNet, DISSec, GTIISC
  • 2: Department of Electrical and Computer Engineering, WiNMos Lab
*Contact email: liuxg@nbjl.nankai.edu.cn

Abstract

Edge computing alleviates network congestion and latency pressure on the remote cloud, as well as the computation stress on end devices. However, with numerous tasks to serve, effectively allocating and managing the resources of edge servers is of great significance, as it affects both the profit of service providers and the Quality of Service (QoS) of users. This paper proposes RADEAN, a resource allocation model based on Deep Reinforcement Learning (DRL) and Generative Adversarial Networks (GANs). It takes the future resource occupancy of edge servers into account, and applies multi-replay memory with priority to eliminate interference between experiences and improve sampling efficiency. We maximize the average resource utilization of edge servers while ensuring the average transmission latency (ATL) and average execution time (AET) of tasks over the long term. Specifically, based on a state that consists of the predicted resource occupation output by the GAN, the current resource usage of edge servers, and the characteristics of the task queue, the DRL agent makes a resource allocation decision for each task. We conduct experiments on a real-world data trace and show that RADEAN outperforms traditional and state-of-the-art models with strong generalization, reaching a maximum performance improvement of 48.21% over MMRA. The ATL and AET of tasks are also reported to reflect the QoS guarantee. Ablation experiments demonstrate the effectiveness of the multi-replay memory and priority sub-modules.
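To make the abstract's two key ingredients concrete, the sketch below illustrates (a) assembling the DRL agent's state from GAN-predicted occupancy, current server usage, and task-queue features, and (b) a single prioritized replay memory of the kind the multi-replay design would keep several of. This is a minimal illustration, not the paper's implementation: the flat state concatenation, the class and function names, and the use of absolute TD error as the priority signal are all assumptions for exposition.

```python
import random
from collections import deque

def build_state(predicted_occupancy, current_usage, task_features):
    """Assemble the agent's state vector from the GAN-predicted future
    occupancy, the edge servers' current resource usage, and features of
    the pending task queue. (Flat concatenation is an illustrative choice.)"""
    return list(predicted_occupancy) + list(current_usage) + list(task_features)

class PrioritizedReplayMemory:
    """One prioritized replay buffer. A multi-replay scheme would keep
    several of these side by side so experiences from different sources
    do not interfere with each other's sampling."""

    def __init__(self, capacity=1000, eps=1e-3):
        self.transitions = deque(maxlen=capacity)
        self.priorities = deque(maxlen=capacity)
        self.eps = eps  # floor so every transition stays sampleable

    def push(self, transition, td_error=0.0):
        # Priority here is |TD error| + eps (a common heuristic; the
        # paper's actual priority definition may differ).
        self.transitions.append(transition)
        self.priorities.append(abs(td_error) + self.eps)

    def sample(self, batch_size):
        # Draw with probability proportional to priority.
        return random.choices(list(self.transitions),
                              weights=list(self.priorities),
                              k=batch_size)
```

In use, the agent would call `build_state(...)` at each decision step and `sample(...)` from each memory when updating; transitions with larger errors are revisited more often, which is the sampling-efficiency gain the abstract refers to.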

Keywords
edge computing, resource allocation, deep reinforcement learning, generative adversarial networks
Published
2024-07-19
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-63989-0_13
Copyright © 2023–2025 ICST