
Research Article
Performance Analysis of Distributed Learning in Edge Computing on Handwritten Digits Dataset
@INPROCEEDINGS{10.1007/978-3-031-47359-3_12,
  author    = {Tinh Phuc Vo and Viet Anh Nguyen and Xuyen Bao Le Nguyen and Duc Ngoc Minh Dang and Anh Khoa Tran},
  title     = {Performance Analysis of Distributed Learning in Edge Computing on Handwritten Digits Dataset},
  booktitle = {Industrial Networks and Intelligent Systems. 9th EAI International Conference, INISCOM 2023, Ho Chi Minh City, Vietnam, August 2--3, 2023, Proceedings},
  publisher = {Springer},
  year      = {2023},
  month     = {10},
  keywords  = {Edge Computing, Split Computing, Deep Neural Networks, Computation Offloading},
  doi       = {10.1007/978-3-031-47359-3_12}
}
Tinh Phuc Vo
Viet Anh Nguyen
Xuyen Bao Le Nguyen
Duc Ngoc Minh Dang
Anh Khoa Tran
Year: 2023
Performance Analysis of Distributed Learning in Edge Computing on Handwritten Digits Dataset
INISCOM
Springer
DOI: 10.1007/978-3-031-47359-3_12
Abstract
Deep learning models often contain millions or even billions of parameters, making them difficult to deploy on devices with limited resources. This study therefore presents scenarios for assessing the computational capability of edge devices and evaluating the learning performance of distributed learning methods. Using a deep neural network and the MNIST handwritten digit dataset in an edge-computing setting, it evaluates four distributed learning methods (no-offloading, full-offloading, split computing, and federated computing) under both ideal and realistic conditions. Performance is measured by precision, recall, accuracy, F1-score, and estimated time complexity. The findings indicate that full-offloading achieved the highest performance under ideal conditions, whereas in realistic situations split computing and federated computing outperformed the other methods.
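The abstract's evaluation relies on standard classification metrics. As an illustrative sketch only (not code from the paper), the snippet below computes macro-averaged precision and recall from per-class confusion counts, plus overall accuracy and an F1-score derived from the macro averages; the function name and the toy labels are assumptions for the example, and libraries such as scikit-learn offer equivalent, better-tested implementations.

```python
# Hypothetical helper illustrating the metrics named in the abstract:
# precision, recall, accuracy, and F1-score for a multi-class task
# such as MNIST digit classification.
from collections import Counter

def classification_metrics(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    # Per-class true positives, false positives, false negatives.
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but was not p
            fn[t] += 1  # was t, but not predicted as t
    precisions, recalls = [], []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        precisions.append(prec)
        recalls.append(rec)
    precision = sum(precisions) / len(classes)  # macro average
    recall = sum(recalls) / len(classes)        # macro average
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    # One common variant: F1 computed from the macro-averaged precision/recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, accuracy, f1
```

Note that macro averaging weights each digit class equally regardless of support, which is a reasonable choice for the roughly balanced MNIST classes.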