Quality, Reliability, Security and Robustness in Heterogeneous Systems. 19th EAI International Conference, QShine 2023, Shenzhen, China, October 8 – 9, 2023, Proceedings, Part II

Research Article

Federated Learning Optimization Algorithm Based on Dynamic Client Scale

Cite

BibTeX:
  • @INPROCEEDINGS{10.1007/978-3-031-65123-6_8,
        author={Luya Wang and Wenliang Feng and Ruoheng Luo},
        title={Federated Learning Optimization Algorithm Based on Dynamic Client Scale},
        proceedings={Quality, Reliability, Security and Robustness in Heterogeneous Systems. 19th EAI International Conference, QShine 2023, Shenzhen, China, October 8 -- 9, 2023, Proceedings, Part II},
        proceedings_a={QSHINE PART 2},
        year={2024},
        month={8},
        keywords={Adaptive optimization, Federated learning, Large batch training},
        doi={10.1007/978-3-031-65123-6_8}
    }
    
Plain text:

  • Luya Wang
    Wenliang Feng
    Ruoheng Luo
    Year: 2024
    Federated Learning Optimization Algorithm Based on Dynamic Client Scale
    QSHINE PART 2
    Springer
    DOI: 10.1007/978-3-031-65123-6_8
Luya Wang, Wenliang Feng, Ruoheng Luo*
    *Contact email: neal@szu.edu.cn

    Abstract

    Federated learning methods typically learn models from the local iterative updates of a large number of clients. Interest in how the number of clients affects the training dynamics of federated learning algorithms has grown in recent years. Increasing the client scale during training not only improves data-parallel efficiency but also accelerates federated learning. When optimizing models with a large number of clients, the learning rate must adapt to the client scale and to the aggregation of updates in order to maximize speed while preserving model quality. However, current approaches rely mainly on empirically derived linear learning-rate scaling rules, which cannot adapt to the dynamic client scale in federated learning. To address this, we propose ASNES, an algorithm that dynamically adapts to the client scale in federated learning. By continuously adapting to the client quantity and to the aggregation of updates, ASNES accelerates training across different client scales. In experimental evaluations, ASNES performs favorably compared with other benchmark algorithms.
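
    The linear learning-rate scaling rule that the abstract contrasts with ASNES can be illustrated with a minimal sketch of a server-side FedAvg round. This is an illustrative assumption, not the ASNES algorithm itself: the helper names (linear_scaled_lr, fedavg_round) and all constants are hypothetical.

        import numpy as np

        def linear_scaled_lr(base_lr, num_clients, base_clients):
            # Empirical linear scaling rule: grow the server learning rate
            # in proportion to the number of participating clients.
            # (Hypothetical helper for illustration only.)
            return base_lr * num_clients / base_clients

        def fedavg_round(global_model, client_updates, server_lr):
            # One FedAvg-style round: average the clients' pseudo-gradients
            # and apply them with a server-side learning rate.
            avg_update = np.mean(client_updates, axis=0)
            return global_model - server_lr * avg_update

        # Toy usage: the client scale changes across rounds, so the server
        # learning rate is rescaled each round instead of staying fixed.
        rng = np.random.default_rng(0)
        model = np.zeros(10)
        for num_clients in (8, 16, 32):  # dynamic client scale per round
            updates = [rng.normal(size=10) * 0.01 for _ in range(num_clients)]
            lr = linear_scaled_lr(base_lr=0.1, num_clients=num_clients, base_clients=8)
            model = fedavg_round(model, updates, lr)

    ASNES, by contrast, is described as continuously adapting to both the client quantity and the aggregated updates rather than applying a fixed linear rule; the sketch above only sets up the baseline it improves upon.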

    Keywords
    Adaptive optimization, Federated learning, Large batch training
    Published
    2024-08-20
    Appears in
    SpringerLink
    http://dx.doi.org/10.1007/978-3-031-65123-6_8