IoT as a Service. 8th EAI International Conference, IoTaaS 2022, Virtual Event, November 17-18, 2022, Proceedings

Research Article

Cooperative Hybrid-Caching for Long-Tail Distribution Request with Deep Reinforcement Learning

BibTeX

    @INPROCEEDINGS{10.1007/978-3-031-37139-4_14,
        author={Weibao He and Fasheng Zhou and Dong Tang},
        title={Cooperative Hybrid-Caching for Long-Tail Distribution Request with Deep Reinforcement Learning},
        proceedings={IoT as a Service. 8th EAI International Conference, IoTaaS 2022, Virtual Event, November 17-18, 2022, Proceedings},
        proceedings_a={IOTAAS},
        year={2023},
        month={7},
        keywords={Edge caching, Deep reinforcement learning, Deep long-tail learning},
        doi={10.1007/978-3-031-37139-4_14}
    }
Weibao He1,*, Fasheng Zhou2, Dong Tang2
  • 1: School of Physics and Materials Science, Guangzhou University
  • 2: School of Electronics and Communication Engineering, Guangzhou University
*Contact email: hwb@e.gzhu.edu.cn

Abstract

Wireless caching is regarded as a promising technique for alleviating network congestion in next-generation communications. In this work, we focus on the impact of input-data non-uniformity on neural network training when deep learning is used to derive wireless caching strategies. In particular, user requests in wireless caching are generally assumed to follow a classical long-tailed distribution: Zipf's law. We address this problem from both the cache model and the deep reinforcement learning model. On the one hand, the base station prefetches a portion of the most popular contents to the user side, reducing the bias of the caching strategy. On the other hand, deep long-tail learning techniques are applied to prevent the neural network from over-fitting caused by inputs concentrated in the most popular files. The performance of different reinforcement learning methods is analyzed, showing that our method achieves better performance and latency than existing content-caching schemes.
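To make the long-tail setting concrete, the Zipf request model mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's simulation setup: the catalogue size, skew exponent, and sample count are assumed values chosen only to show how requests concentrate on the most popular files.

```python
import numpy as np

def zipf_request_probabilities(num_files: int, alpha: float) -> np.ndarray:
    """Zipf's law: the k-th most popular file has popularity proportional
    to k^(-alpha), normalized so the probabilities sum to 1."""
    ranks = np.arange(1, num_files + 1)
    weights = ranks ** (-alpha)
    return weights / weights.sum()

# Assumed parameters for illustration: 1000 files, skew alpha = 0.8.
rng = np.random.default_rng(0)
probs = zipf_request_probabilities(num_files=1000, alpha=0.8)

# Simulate 10,000 user requests drawn from the Zipf distribution.
requests = rng.choice(1000, size=10_000, p=probs)

# Fraction of requests that hit the 10 most popular files: the "head"
# of the distribution dominates, which is what biases DRL training data.
top10_share = np.isin(requests, np.arange(10)).mean()
```

With a skew this mild, a small head of the catalogue still attracts a disproportionate share of requests, which is the non-uniformity the paper's hybrid caching and long-tail learning tricks aim to counteract.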

Keywords
Edge caching, Deep reinforcement learning, Deep long-tail learning
Published
2023-07-19
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-37139-4_14
Copyright © 2022–2025 ICST
Indexed in: EBSCO, ProQuest, DBLP, DOAJ, Portico