Towards Robust Recommender Systems via
Triple Cooperative Defense
Qingyang Wang1, Defu Lian⋆1, Chenwang Wu1, and Enhong Chen1
University Of Science And Technology Of China, 96 Jinzhai Road, Hefei, Anhui, China
greensun@mail.ustc.edu.cn, liandefu@ustc.edu.cn,
wcw1996@mail.ustc.edu.cn, cheneh@ustc.edu.cn
Abstract.
Recommender systems are often susceptible to well-crafted
fake profiles, leading to biased recommendations. The wide application
of recommender systems makes studying defenses against attacks
necessary. Among existing defense methods, data-processing-based methods
inevitably exclude normal samples, while model-based methods struggle
to enjoy both generalization and robustness. Considering the above
limitations, we suggest integrating data processing and robust model
design, and propose a general framework, Triple Cooperative Defense (TCD),
which cooperates to improve model robustness through the co-training
of three models. Specifically, in each round of training, we sequentially
use the high-confidence prediction ratings (consistent ratings) of any
two models as auxiliary training data for the remaining model, and the
three models cooperatively improve recommendation robustness. Notably,
TCD adds pseudo label data instead of deleting abnormal data, which
avoids the cleaning of normal data, and the cooperative training of the
three models is also beneficial to model generalization. Through extensive
experiments with five poisoning attacks on three real-world datasets,
the results show that the robustness improvement of TCD significantly
outperforms baselines. It is worth mentioning that TCD is also beneficial
for model generalization.
Keywords: Recommender Systems, Model Robustness, Poisoning Attacks
1 Introduction
In recent years, with the rapid development of Internet technology, the amount
of information on the Internet has shown explosive growth. To obtain valuable
information from massive data more quickly and effectively, “recommender
systems” [2] came into being and quickly gained extensive attention and
practical application in academia and industry. Recommender algorithms mine
the content that the user is interested in from a large amount of data by using
information such as user behavior and item characteristics, and present it to
the user in a list [15]. Their superiority and commercial background make them
widely used in various industries [2,5,20].
⋆ Corresponding author
arXiv:2210.13762v1 [cs.LG] 25 Oct 2022
However, while providing convenience for our lives, recommender systems also
face severe security problems. Since collaborative filtering works on user
profile information, it is easily affected by false profiles. Studies [18,21,33]
have long shown that recommender systems, especially those for sales and
scoring, are vulnerable to systematic interference with the user ratings
included in the system, which in turn impacts users’ purchase behavior and
recommendation results [5]. Even if attackers do not know the algorithm or
implementation details of the recommender system, small-scale misleading data
can still noticeably interfere with its normal recommendation behavior. For
example, in 2002, after receiving a complaint, Amazon found that whenever its
website recommended a Christian classic, another irrelevant book was recommended
simultaneously, which was caused by malicious users using deceptive means [22].
Two main defense methods against poisoning attacks are data-processing-based
defense and model-based defense [7,34]. Data-based defense tries to study
the characteristics of poisoning attacks, strip fake profiles, and purify
datasets before the training of recommender systems. However, to pursue high
recall, these methods inevitably delete normal data, which leads to biased
recommendations. Model-based defense improves the robustness of the
recommendation algorithm itself, and adversarial training [24] is recognized
as the most popular and effective model-based defense method to enhance
recommendation robustness [34]. This method maximizes recommendation error
while minimizing the model’s empirical risk by adding adversarial perturbations
to the model parameters, eventually building robust models through adversarial
games. Although adversarial training can significantly improve the robustness
of the recommender system, it is difficult to control the strength of
adversarial noise, which reduces the generalization of the recommendation to a
certain extent. Besides, a recent study has shown that adversarial training
with perturbations added to model parameters cannot resist poisoning attacks
well [34]. Therefore, a suitable means is needed to integrate the two
approaches, exploiting their strengths while avoiding their weaknesses.
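To make the min-max idea concrete, the following is a toy numpy sketch of adversarial training on the parameters of a matrix-factorization model: each step computes a worst-case perturbation of fixed norm eps along the gradient, then descends on the clean loss plus the loss at the perturbed parameters. The tiny dataset, factor dimension, and hyperparameters (eps, lam, lr) are illustrative assumptions, not settings from the paper or from [24].

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observed ratings: (user, item, rating) triples of a 4x5 rating matrix.
ratings = [(0, 1, 5.0), (1, 2, 3.0), (2, 0, 4.0), (3, 4, 2.0)]
n_users, n_items, k = 4, 5, 3
P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors

def mse(P, Q):
    return float(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))

def grads(P, Q):
    # Gradients of the MSE loss w.r.t. both factor matrices.
    gP, gQ = np.zeros_like(P), np.zeros_like(Q)
    for u, i, r in ratings:
        e = P[u] @ Q[i] - r
        gP[u] += 2 * e * Q[i] / len(ratings)
        gQ[i] += 2 * e * P[u] / len(ratings)
    return gP, gQ

eps, lam, lr = 0.1, 0.5, 0.05  # perturbation size, adversarial weight, step size
loss_before = mse(P, Q)
for _ in range(300):
    gP, gQ = grads(P, Q)
    # Worst-case parameter perturbation: a step of norm eps along the gradient.
    dP = eps * gP / (np.linalg.norm(gP) + 1e-12)
    dQ = eps * gQ / (np.linalg.norm(gQ) + 1e-12)
    # Descend on the clean loss plus the loss at the perturbed parameters.
    gPa, gQa = grads(P + dP, Q + dQ)
    P -= lr * (gP + lam * gPa)
    Q -= lr * (gQ + lam * gQa)

print(f"training loss: {loss_before:.2f} -> {mse(P, Q):.2f}")
```

Because the perturbation targets the parameters rather than the input data, the model is pushed toward flat minima, which is exactly why this style of defense struggles against data-level poisoning, as noted above.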
Based on the shortcomings mentioned above, we propose a novel defense
method that integrates data processing and model robustness boosting, Triple
Cooperative Defense (TCD), to enhance the robustness of recommender systems.
Specifically, in each round of training, we sequentially use the high-confidence
prediction ratings (consistent ratings) of any two models as auxiliary training
data for the remaining model, and the three models cooperatively improve
recommendation robustness. The proposed strategy is based on the following
considerations. In recommender systems, extremely sparse user-item interactions
can hardly support good model training, leading to models that are easily
misled by malicious profiles. Besides, recent work also emphasizes that model
robustness requires more real data [34]. Therefore, we make reasonable use of
cheap pseudo-labels. Obviously, pseudo-labels must be guaranteed by
high-confidence ratings, but in the explicit-feedback-based recommender systems
that we focus on, the predicted value is the rating, not the confidence. To
this end, we suggest training three models and using any two models’ consistent
prediction ratings as auxiliary training data for the third model. Model
robustness is improved through data augmentation and co-training of the three
models. Notably, we neither cull data nor modify the individual model
structure, which overcomes the shortcomings of existing defense methods.
Through extensive experiments with five poisoning attacks on three real-world
datasets, the results show that the robustness improvement of TCD significantly
outperforms baselines. It is worth mentioning that TCD also improves model
generalization.
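The round structure described above can be sketched in a few lines of numpy. The sketch below uses three matrix-factorization models; in each round, for every model, the cells where the other two models agree closely become pseudo-labeled auxiliary data. The agreement threshold tau, the in-range filter, the per-round cap on pseudo-labels, and all hyperparameters are illustrative assumptions, not the paper's actual settings (and for brevity it does not exclude already-observed cells from the pseudo-label pool).

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k = 6, 8, 4

# Sparse observed ratings: 10 distinct (user, item) cells of a 6x8 matrix.
cells = rng.choice(n_users * n_items, size=10, replace=False)
obs = [(int(c) // n_items, int(c) % n_items, float(rng.integers(1, 6)))
       for c in cells]

def init_model():
    return (rng.normal(scale=0.1, size=(n_users, k)),   # user factors
            rng.normal(scale=0.1, size=(n_items, k)))   # item factors

def sgd_pass(P, Q, data, lr=0.05):
    # One SGD pass over (user, item, rating) triples for a single model.
    for u, i, r in data:
        e = P[u] @ Q[i] - r
        pu = P[u].copy()
        P[u] -= lr * e * Q[i]
        Q[i] -= lr * e * pu

def mse(P, Q, data):
    return float(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in data]))

models = [init_model() for _ in range(3)]
tau = 0.2  # agreement threshold for "consistent ratings"
loss_before = mse(*models[0], obs)

for _ in range(50):  # co-training rounds
    preds = [P @ Q.T for P, Q in models]
    for m in range(3):
        a, b = (p for j, p in enumerate(preds) if j != m)
        # Pseudo-labels: cells where the other two models agree closely
        # and predict a plausible in-range rating.
        agree = (np.abs(a - b) < tau) & (a > 1.0) & (a < 5.0)
        pseudo = [(int(u), int(i), float((a[u, i] + b[u, i]) / 2))
                  for u, i in zip(*np.nonzero(agree))][:20]
        sgd_pass(*models[m], obs + pseudo)

print(f"model 0 loss on observed data: "
      f"{loss_before:.2f} -> {mse(*models[0], obs):.2f}")
```

Note how agreement between two independently initialized models stands in for a confidence score, which is exactly the workaround for explicit-feedback models that predict ratings rather than confidences.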
The main contributions of this work are summarized as follows:
– The proposal of a novel robust training strategy, named Triple Cooperative
Defense, which generates pseudo-labels for the recommender system to
eliminate the damage of malicious profiles to models, and trains three models
cooperatively to improve model robustness. It is noteworthy that this is
the first algorithm to combine data-processing-based and model-based
defense in recommender systems.
– An extensive study of co-training (defensive) methods to robustify
recommendation performance through the analysis of five attacks and three
recommendation datasets. The results verify that our method enhances the
robustness of the recommendation while ensuring generalization.
2 Related Work
2.1 Security of Recommender Systems
Many issues about security and privacy have been studied in recommender
systems, suggesting that recommender systems are vulnerable [8,29]; this has
led to the development of a toolkit for evaluating robustness [27]. Earlier
attacks injected malicious profiles generated manually with little knowledge
about the recommender system, and thus could not achieve satisfactory attack
performance, e.g., the random attack [17] and the average attack [17]. The
training of model-based recommendation algorithms usually uses backpropagation
[12,14], so perturbations were added along the gradient direction to perform
the attack [10,11,18,31].
Inspired by the GAN’s application [16] in recommendation, some works [6,21]
used GANs to generate real-like fake ratings to bypass detection. With the
development of optimization algorithms, many works focused on attacking
specific types of recommender systems and turned attacks into optimization
problems of deciding appropriate rating scores for users [11,17,18,26,36].
Moreover, some works [9,30] treated the items’ ratings as actions and used
reinforcement learning to generate real-like fake ratings. Such
optimization-based methods have strong attack performance, so defense is
needed to mitigate the harm of attacks.
2.2 Defense against Poisoning Attacks
According to the defense objective, a defense can be (i) reactive attack
detection [7] or (ii) proactive robust model construction, as detailed below.