INIS 22(31): 5

Research Article

DTWN: Q-learning-based Transmit Power Control for Digital Twin WiFi Networks

  • @ARTICLE{10.4108/eetinis.v9i31.1059,
        author={Lal Verda \c{C}ak{\i}r and Khayal Huseynov and Elif Ak and Berk Canberk},
        title={DTWN: Q-learning-based Transmit Power Control for Digital Twin WiFi Networks},
        journal={EAI Endorsed Transactions on Industrial Networks and Intelligent Systems},
        volume={9},
        number={31},
        publisher={EAI},
        journal_a={INIS},
        year={2022},
        month={6},
        keywords={Digital Twin, Reinforcement Learning, Transmit Power Control, WiFi, Interference},
        doi={10.4108/eetinis.v9i31.1059}
    }
    
Lal Verda Çakır1,*, Khayal Huseynov1, Elif Ak1, Berk Canberk1
  • 1: Istanbul Technical University
*Contact email: cakirl18@itu.edu.tr

Abstract

Interference has always been the main threat to the performance of traditional WiFi networks, and it remains so for next-generation networks. The problem can be mitigated with transmit power control (TPC). However, TPC requires an information-gathering process, and this introduces overhead that reduces throughput. Moreover, the degree of interference mitigation depends on the selection of transmit powers: the control scheme should select the optimum configuration among all possibilities with respect to total interference, which requires an extensive search. Furthermore, real-time bidirectional communication is needed to control the transmit powers based on the current network state. To address these challenges, we propose a complete solution based on Digital Twin WiFi Networks (DTWN). In contrast to other studies, agent programs installed on the APs in the physical layer of this architecture gather information without adding overhead to the wireless medium. Additionally, we employ Q-learning-based TPC in the Brain Layer to find the best configuration for the current situation. As a result, the digital twin enables real-time monitoring and management. We evaluate the performance of the proposed approach in terms of total interference and throughput as the number of users increases, and we show that the proposed DTWN model outperforms existing schemes.
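The Q-learning-based TPC idea in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's formulation: a single-state Q-table over a discrete set of candidate transmit power levels, a toy interference model that grows with power and user count, and a reward that trades throughput against total interference. The function names, parameters, and reward shape are all hypothetical.

```python
import random

# Assumed discrete action set of candidate transmit powers (dBm).
POWER_LEVELS = [5, 10, 15, 20]

def interference(power_dbm, num_users):
    # Toy stand-in for measured total interference: grows with
    # transmit power (quadratically) and with the number of users.
    return 0.002 * power_dbm * power_dbm * num_users

def throughput(power_dbm, num_users):
    # Toy stand-in: more power helps coverage, interference hurts.
    return power_dbm / (1.0 + interference(power_dbm, num_users))

def reward(power_dbm, num_users):
    # Hypothetical reward: favor throughput, penalize interference.
    return throughput(power_dbm, num_users) - interference(power_dbm, num_users)

def train_q_table(num_users, episodes=2000, alpha=0.1, gamma=0.9,
                  epsilon=0.1, seed=0):
    """Epsilon-greedy Q-learning over a single-state Q-table of power actions."""
    rng = random.Random(seed)
    q = [0.0] * len(POWER_LEVELS)
    for _ in range(episodes):
        if rng.random() < epsilon:                      # explore
            a = rng.randrange(len(POWER_LEVELS))
        else:                                           # exploit
            a = max(range(len(POWER_LEVELS)), key=q.__getitem__)
        r = reward(POWER_LEVELS[a], num_users)
        # Single-state update: the "next state" is the same state,
        # so the bootstrap term is gamma * max(q).
        q[a] += alpha * (r + gamma * max(q) - q[a])
    return q

if __name__ == "__main__":
    q = train_q_table(num_users=8)
    best = POWER_LEVELS[max(range(len(POWER_LEVELS)), key=q.__getitem__)]
    print("learned power level:", best, "dBm")
```

Under this toy reward, the agent settles on the power level with the best interference/throughput trade-off for the given user count; in the paper's architecture, the analogous learning would run in the Brain Layer against interference measurements gathered by the AP agents.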