GROUP PERSONALIZED FEDERATED LEARNING
Zhe Liu, Yue Hui, Fuchun Peng
Meta AI, Menlo Park, CA, USA
ABSTRACT
Federated learning (FL) can help promote data privacy by training a
shared model in a decentralized manner on clients' physical devices.
In the presence of heterogeneous distributions of local data,
personalized FL strategies are introduced to mitigate potential client
drift. In this paper, we present a group personalization approach
for FL applications in which there exist inherent partitions over
clients that are significantly distinct. In our approach, the global FL
model is fine-tuned through another FL training process over each
homogeneous group of clients, after which each group-specific FL
model is further adapted and personalized per client. The proposed
method can be well interpreted from a Bayesian hierarchical modeling
perspective. With experiments on two real-world datasets, we
demonstrate that this approach achieves better personalization
performance than other FL counterparts.
Index Terms—Federated learning, personalization, language
modeling
1. INTRODUCTION
In recent years, there has been a rise in the popularity of a distributed
learning technique called federated learning (FL) [1, 2, 3]. FL has
been applied in many fields including recommendation [4], smart
keyboard suggestion [5, 6], keyword spotting [7], health care [8],
and automatic speech recognition (ASR) [9, 10, 11].
FL can help promote data privacy by training a shared model
in a decentralized manner on users' local devices, so that raw data
stays on physical devices. Specifically, FL distributes the training
process among a large number of client devices, with each client
device learning from private data and computing model updates
independently, then uploading those updates to a central server for
aggregation. The updated model is then delivered back to each client
device, and this procedure is repeated until convergence.
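As a concrete illustration of this loop, the sketch below implements one FedAvg-style communication round on a toy linear model. The function names and the NumPy-based model are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def local_update(weights, client_data, lr=0.01, epochs=1):
    # A client's on-device training step: a few epochs of SGD on its
    # private (x, y) pairs. A linear model with squared error is used
    # purely for illustration.
    w = weights.copy()
    for _ in range(epochs):
        for x, y in client_data:
            grad = 2.0 * (w @ x - y) * x  # gradient of (w.x - y)^2
            w -= lr * grad
    return w

def fedavg_round(global_weights, clients):
    # One communication round: every client trains locally, then the
    # server averages the resulting models, weighted by the number of
    # local examples (as in FedAvg).
    updates = [local_update(global_weights, data) for data in clients]
    sizes = np.array([len(data) for data in clients], dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, updates))
```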
The vanilla FL approach faces challenges in the presence of
highly heterogeneous local data distributions. Personalized FL
strategies seek to address this performance issue and mitigate
potential client drift [12, 13, 14]. In particular, a two-step "global
FL training + local fine-tuning" method is commonly adopted for
personalization, where the trained global FL model is personalized
for each FL client. This is done through a local adaptation step that
involves additional training on each local dataset [15, 16].
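Under the same toy setup sketched above, this second step reduces to continuing training on a single client's data, starting from the trained global model; the hyperparameters below are placeholders, not values from the paper.

```python
def personalize_locally(global_weights, client_data, lr=0.005, epochs=3):
    # Local adaptation step of the two-step scheme: fine-tune the
    # trained global FL model on one client's private data alone,
    # yielding a client-specific model.
    return local_update(global_weights, client_data, lr=lr, epochs=epochs)
```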
However, this two-step federated personalization approach has
limitations when a majority of users have only a few training
examples, which is common in practice due to long-tailed, skewed
distributions of user data. Fine-tuning a large global FL model on
insufficient personal data may not improve the performance for
individual clients and tends to suffer from overfitting.
For applications where there exist inherently partitioned groups
among clients, each client can leverage the extra knowledge learned
from the training records of other clients in its group to enhance
its own personalized model. This procedure should also be conducted
within an FL framework, since raw data must stay on devices.
In this paper, we present a novel three-step "global FL training
+ group FL fine-tuning + local personalization" approach.
Specifically, it first follows the general FL training process, where
a single global FL model is learned. This trained global model is
then fine-tuned through another FL training process over each
homogeneous group of clients. Finally, each group-level model is
further adapted and personalized using the private data of each client.
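Reusing the hypothetical helpers sketched above, the whole three-step procedure might be orchestrated as follows; the round counts and the dictionary-based group structure are illustrative assumptions.

```python
def group_personalized_fl(init_weights, groups,
                          global_rounds=100, group_rounds=20):
    # `groups` maps a group id to the list of that group's clients,
    # where each client is a list of private (x, y) training pairs.

    # Step 1: global FL training over the union of all clients.
    all_clients = [c for members in groups.values() for c in members]
    w_global = init_weights
    for _ in range(global_rounds):
        w_global = fedavg_round(w_global, all_clients)

    # Step 2: group FL fine-tuning -- a separate federated run per
    # homogeneous group, each initialized from the global model.
    group_models = {}
    for gid, members in groups.items():
        w_group = w_global.copy()
        for _ in range(group_rounds):
            w_group = fedavg_round(w_group, members)
        group_models[gid] = w_group

    # Step 3: local personalization of each group-level model on the
    # owning client's private data.
    client_models = {
        (gid, i): personalize_locally(group_models[gid], data)
        for gid, members in groups.items()
        for i, data in enumerate(members)
    }
    return w_global, group_models, client_models
```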
Our work makes the following technical contributions:
(1) proposing group personalized FL, an effective approach for
integrating global aggregation, group-level knowledge sharing, and
local training; (2) interpreting the proposed procedure from a
Bayesian hierarchical modeling perspective; and (3) evaluating the
approach on real-world datasets for a language modeling task, where
it achieves improved personalization results.
The rest of the paper is organized as follows. We review related
work in Section 2. Section 3 presents the proposed method of group
personalized FL. Section 4 interprets the presented procedure from
a Bayesian hierarchical modeling perspective. Section 5 shows the
experiments on two real-world datasets. We conclude in Section 6.
2. RELATED WORK
Recently, there has been an emerging line of research that develops
clustering strategies for clients in FL settings [17, 18, 19, 20, 21,
22, 23]. In particular, previous work in [21] and [23] proposes to
apply FL on a hierarchical architecture and explores its potential
benefits for addressing privacy-related issues. The authors of [17]
present an iterative clustering algorithm that estimates the cluster
identities of the clients and optimizes model parameters for the
clusters. Another approach [18, 22] groups the training of clients
based on the similarities between the clients' optimization
directions. Moreover, [20] introduces a multi-center aggregation
mechanism that learns multiple global models from data and
simultaneously derives the optimal matching between clients and centers.
While most of the existing literature focuses on clustering
algorithms over clients, our work mainly investigates how group or
cluster information can be efficiently utilized to improve
personalization performance. To the best of our knowledge, ours is
the first empirical study on combining group- or cluster-based FL
with personalization. A comparison of the various clustering
algorithms for inferring the groups of clients is beyond the scope
of this work.
3. GROUP PERSONALIZED FL
In this section, we present group personalized FL, a three-step
method consisting of global FL training, group FL fine-tuning, and
local personalization.
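Before the detailed steps, it may help to note how the three levels of training line up with a hierarchical prior. The Gaussian form below is our illustrative assumption of such a hierarchy (the paper develops its own formulation in Section 4), with $\theta_g$ the global model, $\theta_k$ the model of group $k$, and $\theta_{k,i}$ the personalized model of client $i$ in group $k$:

```latex
\theta_g \sim p(\theta), \qquad
\theta_k \mid \theta_g \sim \mathcal{N}\!\left(\theta_g, \sigma^2 I\right), \qquad
\theta_{k,i} \mid \theta_k \sim \mathcal{N}\!\left(\theta_k, \tau^2 I\right)
```

Under this reading, each of the three training steps can be loosely viewed as an approximate MAP estimate at its level of the hierarchy, conditioned on the level above.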