
Research Article
FedFR: Evaluation and Selection of Loss Functions for Federated Face Recognition
@INPROCEEDINGS{10.1007/978-3-031-24383-7_6,
  author       = {Ertong Shang and Zhuo Yang and Hui Liu and Junzhao Du and Xingyu Wang},
  title        = {FedFR: Evaluation and Selection of Loss Functions for Federated Face Recognition},
  proceedings  = {Collaborative Computing: Networking, Applications and Worksharing. 18th EAI International Conference, CollaborateCom 2022, Hangzhou, China, October 15-16, 2022, Proceedings, Part I},
  proceedings_a = {COLLABORATECOM},
  year         = {2023},
  month        = {1},
  keywords     = {Federated learning, Face recognition, Loss function, Metric learning},
  doi          = {10.1007/978-3-031-24383-7_6}
}
Ertong Shang
Zhuo Yang
Hui Liu
Junzhao Du
Xingyu Wang
Year: 2023
FedFR: Evaluation and Selection of Loss Functions for Federated Face Recognition
COLLABORATECOM
Springer
DOI: 10.1007/978-3-031-24383-7_6
Abstract
With growing concerns about data privacy and the boom in mobile and ubiquitous computing, federated learning, as an emerging privacy-preserving collaborative computing approach, has recently received widespread attention. In this setting, many clients collaboratively train a shared global model under the orchestration of a remote server while keeping the training data localized. To achieve better federated learning performance, most existing work has focused on designing advanced learning algorithms, such as server-side parameter aggregation policies. However, local optimization on client devices, especially the selection of an appropriate loss function for local training, has not been well studied. To fill this gap, we construct a federated face recognition prototype system and test five classical metric learning methods (i.e., loss functions) in this system, comparing their practical performance in terms of global model accuracy, communication cost, convergence rate, and resource occupancy. Extensive empirical studies demonstrate that the relative performance of these approaches varies greatly across federated scenarios. Specifically, when the number of categories to recognize on each client is large, a classification-based loss function yields a better global model faster and with less communication cost, whereas when each client holds only a few classes, a pair-based method can be more communication-efficient and achieve higher accuracy. Finally, we interpret this phenomenon from the perspective of similarity optimization and offer suggestions on choosing among the various loss functions.
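To make the distinction in the abstract concrete, the sketch below contrasts a classification-based loss (a plain softmax cross-entropy head over the client's local identities) with a pair-based loss (a simple contrastive formulation over embedding distances). It is an illustrative assumption, not the paper's exact configuration: the specific losses, margin, embedding size, and classifier head are placeholders standing in for the five metric learning methods the authors actually evaluate.

```python
# Illustrative sketch only (assumed, not the authors' exact setup):
# classification-based vs. pair-based local training losses for face recognition.
import torch
import torch.nn.functional as F

def classification_loss(embeddings, labels, class_weights):
    """Classification-based loss: softmax cross-entropy over local identities.

    Needs a client-side classifier head (one weight vector per identity),
    so its parameter count grows with the number of local classes.
    """
    logits = embeddings @ class_weights.t()   # (batch, num_local_classes)
    return F.cross_entropy(logits, labels)

def pair_based_loss(embeddings, labels, margin=0.5):
    """Pair-based loss: a simple contrastive objective on embedding distances.

    Pulls same-identity embeddings together and pushes different identities
    apart by at least `margin`; no classifier head is required.
    """
    emb = F.normalize(embeddings, dim=1)
    dist = torch.cdist(emb, emb)              # pairwise Euclidean distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    eye = torch.eye(len(labels), device=labels.device)
    pos = (dist * (same - eye)).sum() / (same - eye).sum().clamp(min=1)
    neg = (F.relu(margin - dist) * (1 - same)).sum() / (1 - same).sum().clamp(min=1)
    return pos + neg

# Toy usage: 8 samples, 128-d embeddings, 4 local identities.
emb = torch.randn(8, 128, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
W = torch.randn(4, 128, requires_grad=True)   # hypothetical local classifier head
print(classification_loss(emb, labels, W).item(), pair_based_loss(emb, labels).item())
```

The practical trade-off the paper studies follows from this structural difference: the classification head must be trained (and possibly communicated) per client and scales with the number of local identities, while the pair-based objective depends only on the embedding network but needs enough positive pairs per batch to be informative.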