Federated Unlearning for On-Device Recommendation

Wei Yuan
The University of Queensland
Brisbane, Australia
w.yuan@uq.edu.au
Hongzhi Yin∗
The University of Queensland
Brisbane, Australia
h.yin1@uq.edu.au
Fangzhao Wu
Microsoft Research Asia
Beijing, China
wufangzhao@gmail.com
Shijie Zhang
Tencent
Shenzhen, China
julysjzhang@tencent.com
Tieke He
Nanjing University
Nanjing, China
hetieke@gmail.com
Hao Wang
Alibaba Cloud, Alibaba Group
Hangzhou, China
cashenry@126.com
ABSTRACT
The increasing data privacy concerns in recommendation systems have drawn more and more attention to federated recommendation. Existing federated recommendation systems mainly focus on how to effectively and securely learn personal interests and preferences from on-device interaction data. Still, none of them considers how to efficiently erase a user's contribution to the federated training process. We argue that such a dual setting is necessary. First, from the privacy protection perspective, "the right to be forgotten (RTBF)" requires that users have the right to withdraw their data contributions. Without this reversibility, federated recommendation systems risk breaking data protection regulations. Second, enabling a federated recommender to forget specific users can improve its robustness and resistance to malicious clients' attacks.
To support user unlearning in federated recommendation systems, we propose an efficient unlearning method, FRU (Federated Recommendation Unlearning), inspired by the log-based rollback mechanism of transactions in database management systems. It removes a user's contribution by rolling back and calibrating the historical parameter updates, and then uses these updates to speed up federated recommender reconstruction. However, storing all historical parameter updates on resource-constrained personal devices is challenging and even infeasible. In light of this challenge, we propose a small-sized negative sampling method to reduce the number of item embedding updates and an importance-based update selection mechanism to store only important model updates. To evaluate the effectiveness of FRU, we propose an attack method to disturb federated recommenders via a group of compromised users. Then, we use FRU to recover the recommenders by eliminating these users' influence. Finally, we conduct extensive experiments
on two real-world recommendation datasets (i.e., MovieLens-100k and Steam-200k) with two widely used federated recommenders to show the efficiency and effectiveness of our proposed approaches.

∗Corresponding author.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
Conference acronym ’XX, June 03–05, 2018, Woodstock, NY
© 2023 Association for Computing Machinery.
ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00
https://doi.org/XXXXXXX.XXXXXXX
CCS CONCEPTS
• Information systems → Collaborative filtering.
KEYWORDS
Federated Recommender System, Machine Unlearning
ACM Reference Format:
Wei Yuan, Hongzhi Yin, Fangzhao Wu, Shijie Zhang, Tieke He, and Hao Wang. 2023. Federated Unlearning for On-Device Recommendation. In Proceedings of Make sure to enter the correct conference title from your rights confirmation email (Conference acronym ’XX). ACM, New York, NY, USA, 9 pages. https://doi.org/XXXXXXX.XXXXXXX
1 INTRODUCTION
Recommender Systems (RS) suggest the most appropriate items and services to users by analyzing the collected personal data, e.g., user-item interactions and user profiles [7, 27, 47]. With the growing awareness of privacy and the recent publishing of data privacy protection regulations such as the General Data Protection Regulation (GDPR) [36] in the European Union and the California Consumer Privacy Act (CCPA) [13] in the United States, collecting, storing, and using users' data is becoming harder. To address the above challenges, more and more researchers focus on applying federated learning [28] to recommendation systems (FedRecs), which train recommender models on client devices without sharing user data with a central server or other clients.
Since Ammad et al. [1] proposed the first federated recommendation framework, FedRecs have made great advancements recently [18]. For example, Muhammad et al. [29] proposed FedFast to accelerate training convergence. Lin et al. [23] investigated how to exploit explicit feedback. Liang et al. [22] attempted to improve the security of FedRecs via denoising techniques. Wu et al. [40] incorporated GNNs into a general FedRec framework.
arXiv:2210.10958v2 [cs.IR] 3 Dec 2022

Despite great advancements made in this area, it remains unexplored how to forget specific users during the FedRec training process. Without the ability to erase specific users' contributions to the federated training process, FedRecs might break privacy protection laws or regulations such as the CCPA and GDPR that give users the right to control and withdraw their data at any time. Apart from reducing the risks of breaking privacy protection rules, implementing unlearning is also important for improving FedRecs' robustness and resistance to malicious attacks in an open setting where any user/device can participate in the training process of a FedRec. Recent studies [33, 46] have shown that current FedRecs are still not "safe" enough when facing malicious users' attacks. After detecting such attacks, the ability to efficiently erase these malicious users' influence without retraining from scratch is essential for FedRecs.
Although some recent works [5, 21] tried to apply machine unlearning to recommender systems because of data privacy concerns, all of them focus on traditional centralized recommenders. Their methods require access to the whole training data during unlearning, which is prohibitive in FedRecs. Some works [24–26, 41] explored unlearning in federated learning; however, they are tailored for classification tasks in the Computer Vision (CV) area. To erase the contributions of target clients, the most naive yet effective method is to retrain the recommender model from scratch after removing the target clients, which is infeasible in the real-world recommendation setting due to its huge time and resource costs. Another alternative is to continue training after removing the target clients. However, such a method cannot guarantee whether and when these target users' influence on the global parameters (e.g., item embeddings) will be erased. As a result, how to effectively and efficiently erase target clients' contributions is non-trivial in FedRecs.
Inspired by the log-based rollback mechanism of transactions in database management systems (DBMS), we propose to record each client's historical model updates. Once we need to erase some users' contributions, we roll back and reconstruct the other clients' models according to their training logs (i.e., their historical model updates). To achieve that, the most intuitive way is to keep all clients' historical model updates at the central server. This method can work when the number of clients is small, such as in the federated classification setting in which there are only tens of clients [24]. However, in FedRecs, the number of clients is several orders of magnitude larger than in such classification settings. As the storage costs at the central server increase linearly with the number of clients, this naive method is unsustainable and impractical for FedRecs. Therefore, we propose to retain historical model updates on each client's local device, so that the storage cost at each client device is decoupled from the number of clients. Still, it is non-trivial to store each client's historical model updates on a resource-constrained device, and it is infeasible to simply store all updates.
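To make the rollback idea concrete, the sketch below illustrates a per-client log of model deltas that is replayed while skipping the rounds contaminated by unlearned users. All names here are ours, not the paper's, and FRU additionally calibrates the remaining updates rather than simply dropping the contaminated rounds; that calibration is omitted for brevity.

```python
class ClientUpdateLog:
    """Per-client log of model deltas, one entry per training round.
    A minimal illustration of the log-based rollback idea."""

    def __init__(self, init_params):
        self.init_params = list(init_params)
        self.deltas = []  # the delta applied to the local model each round

    def record(self, delta):
        self.deltas.append(list(delta))

    def rollback(self, bad_rounds):
        """Reconstruct parameters by replaying logged deltas while
        skipping rounds influenced by the clients being unlearned."""
        params = list(self.init_params)
        for t, delta in enumerate(self.deltas):
            if t in bad_rounds:
                continue  # drop this round's contribution entirely
            params = [p + d for p, d in zip(params, delta)]
        return params

# Toy usage: three rounds of updates; round 1 must be unlearned.
log = ClientUpdateLog([0.0] * 4)
for t in range(3):
    log.record([float(t + 1)] * 4)
clean = log.rollback(bad_rounds={1})  # -> [4.0, 4.0, 4.0, 4.0]
```

Replaying the log this way is what lets reconstruction start from revised intermediate states instead of from scratch.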
In this paper, we propose FRU (Federated Recommendation Unlearning), a simple yet effective federated recommendation unlearning method. FRU is model-agnostic and can be applied to most federated recommendation systems. The basic idea of FRU is to erase a target client's influence by revising FedRec's historical updates and leveraging the revised updates to speed up FedRec reconstruction. Compared with completely retraining (reconstructing) the FedRec from scratch, FRU requires less running time and achieves even better model performance. FRU stores each client's historical updates locally on decentralized personal devices to avoid high central-server storage overhead. To efficiently utilize the limited storage space on each client's device, we design two novel components: a user-item mixed semi-hard negative sampling component and an importance-based update selection component. The user-item mixed negative sampling exploits high-quality negative samples to train the FedRec, reaching comparable model performance with fewer negative samples than the traditional sampling method. Consequently, it reduces the size of item embedding updates at each client. The importance-based update selection component dynamically chooses important updates to store on each client device at each training epoch, instead of storing all parameter updates.
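As a rough illustration of the importance-based selection, the sketch below keeps only the top-k entries of an update vector and stores them sparsely. This is our own simplification, with magnitude standing in for "importance"; the paper's actual importance measure and storage format may differ.

```python
def select_important_updates(delta, k):
    """Keep only the k largest-magnitude entries of an update vector,
    returning sorted (index, value) pairs to store sparsely on-device.
    Illustrative simplification: "importance" here is just |delta|."""
    ranked = sorted(range(len(delta)), key=lambda i: abs(delta[i]), reverse=True)
    return sorted((i, delta[i]) for i in ranked[:k])

def apply_sparse_update(params, sparse):
    """Replay a sparsely stored update onto a parameter vector."""
    out = list(params)
    for i, v in sparse:
        out[i] += v
    return out

# Toy usage: store only the 2 most important of 5 update entries.
delta = [0.01, -0.5, 0.02, 0.3, -0.001]
sparse = select_important_updates(delta, k=2)    # [(1, -0.5), (3, 0.3)]
params = apply_sparse_update([0.0] * 5, sparse)  # [0.0, -0.5, 0.0, 0.3, 0.0]
```

Storing only (index, value) pairs for the few most important entries is what keeps the per-epoch log small enough for resource-constrained devices.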
After achieving unlearning in FedRecs, evaluating the effectiveness of unlearning is not easy, because there are a large number of users in recommendation datasets, and many of them have common items and similar preferences (this is, in fact, the basis of CF-based recommendation methods). Deleting a small portion of normal users/clients will not significantly change the performance of a FedRec. In light of this, we propose an attack method to destroy the FedRec with a group of compromised clients/users (also called malicious users). An effective unlearning method should recover the destroyed FedRec quickly and achieve comparable or even better performance than training without malicious users.
To demonstrate the eectiveness of our proposed approach, we
choose two commonly used recommenders [
45
], Neural Collab-
orative Filtering (NCF) [
17
] and LightGCN [
16
], with the most
basic federated learning protocol [
1
] as our base models. Then, we
conduct extensive experiments with these base models on two real-
world recommendation datasets, MovieLens-100k and Steam-200k.
The experimental results show that FRU can erase the inuence of
removed malicious users, with at least
7
x speedup compared with
the naive retraining from scratch.
The main contributions of this paper are summarized as follows:

• To the best of our knowledge, this is the first work to investigate machine unlearning in federated recommender systems, enabling FedRecs to effectively erase the influence of specific users/clients and efficiently recover afterwards.
• We propose FRU, an unlearning method tailored for FedRecs. It stores each client's historical changes locally on their devices. To improve storage efficiency on resource-constrained devices, we propose a novel negative sampling method and an importance-based update selection mechanism. FRU then rolls back FedRecs to erase the target users'/clients' influence and quickly recovers FedRecs by calibrating the historical model updates.
• We design an attack method to intuitively evaluate unlearning effectiveness. The experimental results demonstrate the effectiveness and efficiency of FRU on two real-world datasets. Comprehensive ablation studies reveal the effectiveness and importance of each technical component in FRU.
2 PRELIMINARIES
2.1 Federated Recommendation
Let $\mathcal{V}$ and $\mathcal{U}$ denote the sets of items and users (clients), respectively. The numbers of items and users are $|\mathcal{V}|$ and $|\mathcal{U}|$. Each user $u_i$ owns a local training dataset $\mathcal{D}_i$, which contains user-item interactions $(u_i, v_j, r_{ij})$. $r_{ij} = 1$ represents that $u_i$ has interacted with item $v_j$, and $r_{ij} = 0$ means no interaction exists between $u_i$ and $v_j$ (i.e., negative samples). We denote the set of all negative instances for $u_i$ as $\mathcal{V}_{neg(i)}$. The federated recommender is trained to predict the score $\hat{r}_{ij}$ between $u_i$ and all non-interacted items. Then, according to the predicted scores $\hat{r}_{ij} \in [0, 1]$, the federated recommender ranks the non-interacted items and recommends the top-ranked ones to $u_i$.