
Research Article
Minimizing Data Retrieval Delay in Edge Computing
@INPROCEEDINGS{10.1007/978-3-031-63992-0_5,
  author={Kolichala Rajashekar and Souradyuti Paul and Sushanta Karmakar and Subhajit Sidhanta},
  title={Minimizing Data Retrieval Delay in Edge Computing},
  proceedings={Mobile and Ubiquitous Systems: Computing, Networking and Services. 20th EAI International Conference, MobiQuitous 2023, Melbourne, VIC, Australia, November 14--17, 2023, Proceedings, Part II},
  proceedings_a={MOBIQUITOUS PART 2},
  year={2024},
  month={7},
  keywords={IoT edge computing reinforcement learning},
  doi={10.1007/978-3-031-63992-0_5}
}
Kolichala Rajashekar
Souradyuti Paul
Sushanta Karmakar
Subhajit Sidhanta
Year: 2024
Minimizing Data Retrieval Delay in Edge Computing
MOBIQUITOUS PART 2
Springer
DOI: 10.1007/978-3-031-63992-0_5
Abstract
For real-time mission-critical applications such as forest fire detection, oil refinery monitoring, etc., the edge computing paradigm is heavily used to process data fetched from IoT devices spread over a considerably large geographical region. For such real-time edge computing applications working under stringent deadlines, the overall retrieval delay, i.e., the delay in fetching the data from the IoT devices to the edge servers, needs to be minimized; otherwise, the retrieval delay in fetching the data from IoT devices distributed over such a large geographical region can be prohibitively large. To achieve the above goal, each IoT device must be assigned to a particular edge server while considering the relative positioning as per the topology of the edge cluster. We prove that the above assignment of IoT devices to an edge cluster, which we denote as the Edge Assignment Problem (EAP), is NP-Hard; obtaining a polynomial-time solution is therefore infeasible. For the above EAP problem, state-of-the-art heuristic algorithms only exploit the search space instead of performing both exploration and exploitation. As a result, these algorithms are unable to achieve an appreciably large reduction in the overall retrieval delay.
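For concreteness, the EAP described above can be read as a capacitated assignment problem. The following is one plausible 0-1 formulation sketched from the abstract alone; the variable names, the per-device load w_i, and the capacity bound C_j are illustrative assumptions, not the paper's exact model.

% Illustrative 0-1 integer-program view of the Edge Assignment Problem (EAP).
% x_{ij} = 1 iff IoT device i is assigned to edge server j; d_{ij} is the
% retrieval delay between device i and server j; w_i is device i's load and
% C_j is server j's capacity. All symbols are assumptions for this sketch.
\begin{align*}
  \min_{x}\ \ & \sum_{i=1}^{n} \sum_{j=1}^{m} d_{ij}\, x_{ij} \\
  \text{s.t.}\ \ & \sum_{j=1}^{m} x_{ij} = 1 \qquad \forall i, \\
  & \sum_{i=1}^{n} w_{i}\, x_{ij} \le C_{j} \qquad \forall j, \\
  & x_{ij} \in \{0,1\} \qquad \forall i, j.
\end{align*}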
To that end, we propose a Deep Reinforcement Learning-based algorithm that produces a near-optimal assignment of IoT devices to the edge cluster while ensuring that none of the edge servers is overloaded. We motivate and demonstrate our proposed algorithm with the use case of federated learning (FL), a popular distributed machine learning paradigm based on the principle of edge computing, in which the clients, i.e., edge servers, train local models on the data obtained from local IoT devices. These local models are further aggregated into a global model at an aggregator (the cloud/fog) by exchanging the model parameters instead of raw data. In that case, an optimal assignment of the IoT devices to each edge server is necessary for reducing the training time of the local models, which, in turn, reduces the overall delay of federated learning. Using experiments that emulate real-world deployment scenarios, we demonstrate that our algorithm outperforms the state of the art in reducing the retrieval delay in the general edge computing scenario, while also minimizing the local training time in federated learning.
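As an illustration of the reinforcement-learning view of the assignment problem, the sketch below trains a tabular Q-learning agent, a deliberate simplification of the paper's deep RL algorithm, to assign devices to servers one at a time, rewarding low retrieval delay and penalizing overload. All sizes, delays, and penalty values are made up for the example.

# Illustrative sketch only: tabular Q-learning for device-to-server assignment.
# The paper proposes a deep RL algorithm; this simplified tabular version
# (and every name and number below) is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_devices, n_servers, capacity = 8, 3, 3
delay = rng.uniform(1.0, 10.0, size=(n_devices, n_servers))  # retrieval delays

# State: index of the device currently being assigned (a fuller state would
# also track the current server loads). Action: which edge server to pick.
Q = np.zeros((n_devices, n_servers))
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(2000):
    load = np.zeros(n_servers)
    for dev in range(n_devices):
        # epsilon-greedy exploration over the servers
        if rng.random() < eps:
            srv = int(rng.integers(n_servers))
        else:
            srv = int(np.argmax(Q[dev]))
        load[srv] += 1
        # reward: negative delay, with a large penalty if the server overloads
        reward = -delay[dev, srv] - (100.0 if load[srv] > capacity else 0.0)
        next_q = Q[dev + 1].max() if dev + 1 < n_devices else 0.0
        Q[dev, srv] += alpha * (reward + gamma * next_q - Q[dev, srv])

assignment = Q.argmax(axis=1)  # greedy assignment learned by the agent
print("device -> server:", assignment)
print("total retrieval delay:", delay[np.arange(n_devices), assignment].sum())

A fuller state representation would also include the current server loads, so that the learned greedy policy respects the capacity bound rather than merely being penalized for violating it during training.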