Machine Learning and Intelligent Communications. 4th International Conference, MLICOM 2019, Nanjing, China, August 24–25, 2019, Proceedings

Research Article

Backscatter-Aided Hybrid Data Offloading for Mobile Edge Computing via Deep Reinforcement Learning

  • @INPROCEEDINGS{10.1007/978-3-030-32388-2_45,
        author={Yutong Xie and Zhengzhuo Xu and Jing Xu and Shimin Gong and Yi Wang},
        title={Backscatter-Aided Hybrid Data Offloading for Mobile Edge Computing via Deep Reinforcement Learning},
        proceedings={Machine Learning and Intelligent Communications. 4th International Conference, MLICOM 2019, Nanjing, China, August 24--25, 2019, Proceedings},
        proceedings_a={MLICOM},
        publisher={Springer},
        year={2019},
        month={10},
        keywords={Deep reinforcement learning, Double DQN, Computation offloading, Backscatter communications},
        doi={10.1007/978-3-030-32388-2_45}
    }
    
Yutong Xie*, Zhengzhuo Xu1,*, Jing Xu1,*, Shimin Gong2,*, Yi Wang*
  • 1: Huazhong University of Science and Technology
  • 2: Sun Yat-sen University
*Contact email: evnxie@foxmail.com, 1157567638@qq.com, xujing@hust.edu.cn, gong0012@e.ntu.edu.sg, wangy37@sustc.edu.cn

Abstract

Data offloading in mobile edge computing (MEC) allows low-power IoT devices at the network edge to optionally offload power-consuming computation tasks to MEC servers. In this paper, we consider a novel backscatter-aided hybrid data offloading scheme to further reduce the power consumption of data transmission. In particular, each device has a dual-mode radio that can offload data via either conventional active RF communications or passive backscatter communications with extremely low power consumption. The flexibility of radio-mode switching makes it more complicated to design the optimal offloading strategy, especially in a dynamic network with time-varying workload and energy supply at each device. Hence, we propose a deep reinforcement learning (DRL) framework to handle the huge state space under uncertain network state information. Using a simple quantization scheme, we design the learning policy in the Double Deep Q-Network (DDQN) framework, which is shown to have better stability and convergence properties. The numerical results demonstrate that the proposed DRL approach learns and converges to the maximal energy efficiency, outperforming the baseline approaches.
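The core mechanism behind the DDQN framework mentioned in the abstract is the decoupling of action selection from action evaluation when forming the learning target: the online network picks the greedy next action, while a separate target network scores it, which reduces the Q-value overestimation of vanilla DQN. The sketch below illustrates this target computation only; the function name, array shapes, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ddqn_target(q_online_next, q_target_next, reward, done, gamma=0.99):
    """Double DQN learning target (illustrative sketch).

    q_online_next : (batch, n_actions) Q-values of next states from the online net
    q_target_next : (batch, n_actions) Q-values of next states from the target net
    reward        : (batch,) immediate rewards
    done          : (batch,) 1.0 if the episode ended at this step, else 0.0
    """
    # Online network SELECTS the greedy next action...
    a_star = np.argmax(q_online_next, axis=1)
    # ...but the target network EVALUATES that action.
    q_eval = q_target_next[np.arange(len(a_star)), a_star]
    # Bootstrapped target; terminal transitions use the reward alone.
    return reward + gamma * (1.0 - done) * q_eval

# Tiny usage example with a batch of two transitions.
q_online_next = np.array([[1.0, 2.0], [3.0, 0.5]])
q_target_next = np.array([[0.2, 0.4], [0.6, 0.1]])
reward = np.array([1.0, 0.0])
done = np.array([0.0, 1.0])
targets = ddqn_target(q_online_next, q_target_next, reward, done, gamma=0.9)
# First transition: 1.0 + 0.9 * 0.4 = 1.36; second is terminal: 0.0
```

In a full training loop, the squared difference between these targets and the online network's Q-values for the taken actions would form the loss, with the target network's weights periodically copied from the online network.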