Industrial Networks and Intelligent Systems. 9th EAI International Conference, INISCOM 2023, Ho Chi Minh City, Vietnam, August 2-3, 2023, Proceedings

Research Article

Performance Analysis of Distributed Learning in Edge Computing on Handwritten Digits Dataset

Cite
@INPROCEEDINGS{10.1007/978-3-031-47359-3_12,
    author={Tinh Phuc Vo and Viet Anh Nguyen and Xuyen Bao Le Nguyen and Duc Ngoc Minh Dang and Anh Khoa Tran},
    title={Performance Analysis of Distributed Learning in Edge Computing on Handwritten Digits Dataset},
    proceedings={Industrial Networks and Intelligent Systems. 9th EAI International Conference, INISCOM 2023, Ho Chi Minh City, Vietnam, August 2-3, 2023, Proceedings},
    proceedings_a={INISCOM},
    year={2023},
    month={10},
    keywords={Edge Computing, Split Computing, Deep Neural Networks, Computation Offloading},
    doi={10.1007/978-3-031-47359-3_12}
}
Tinh Phuc Vo1, Viet Anh Nguyen2, Xuyen Bao Le Nguyen2, Duc Ngoc Minh Dang2,*, Anh Khoa Tran1
  • 1: Modeling Evolutionary Algorithms Simulation and Artificial Intelligence, Faculty of Electrical and Electronics Engineering
  • 2: Computing Fundamental Department
*Contact email: ducdnm2@fe.edu.vn

Abstract

Deep learning models often consist of millions or even billions of parameters, making them challenging to deploy on devices with limited resources. This study therefore presents scenarios for assessing the computational capability of edge devices and evaluating the learning performance of distributed learning methods. It uses a Deep Neural Network and the handwritten digits dataset (MNIST) in an edge-computing setting to compare four distributed learning methods (no offloading, full offloading, split computing, and federated computing) under both ideal and realistic conditions. Performance is evaluated in terms of precision, recall, accuracy, F1-score, and estimated time complexity. The findings indicate that the full-offloading method achieves the highest performance under ideal conditions, whereas in realistic situations the split computing and federated computing methods outperform the others.
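The split-computing method compared in the abstract partitions a network between the edge device and a server, so that only an intermediate activation crosses the network instead of raw data (full offloading) or nothing at all (no offloading). The following is a minimal sketch of that idea for an MNIST-sized input, assuming a small fully connected network and an arbitrary split point; the authors' exact architecture and partitioning are not reproduced here.

```python
# A minimal sketch of split computing for MNIST-sized inputs.
# Assumptions (not from the paper): a small fully connected network
# and a split after the first hidden layer.
import torch
import torch.nn as nn

# Device-side "head": runs on the edge device up to the split point.
head = nn.Sequential(
    nn.Flatten(),        # 28x28 image -> 784-dim vector
    nn.Linear(784, 128),
    nn.ReLU(),
)

# Server-side "tail": receives the intermediate activation and
# finishes the forward pass.
tail = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),   # 10 digit classes
)

# One forward pass: only the 128-dim "smashed" activation would be
# transmitted from device to server in such a setup.
x = torch.randn(32, 1, 28, 28)   # a dummy batch of MNIST-sized images
smashed = head(x)                # computed on the edge device
logits = tail(smashed)           # computed on the edge server
pred = logits.argmax(dim=1)
print(pred.shape)                # torch.Size([32])
```

From `logits` and the ground-truth labels, the precision, recall, accuracy, and F1-score mentioned above could be computed in the usual way (for example, with scikit-learn's macro-averaged metrics).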

Keywords
Edge Computing, Split Computing, Deep Neural Networks, Computation Offloading
Published
2023-10-31
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-031-47359-3_12
Copyright © 2023–2025 ICST