arXiv:2210.04817v1 [cs.LG] 10 Oct 2022
Proceedings of Machine Learning Research vol xxx:1–5, 2022
Do you pay for Privacy in Online learning?
Giorgia Ramponi GIORGIA.RAMPONI@AI.ETHZ.CH
ETH AI Center, Zurich, Switzerland
Amartya Sanyal AMARTYA.SANYAL@AI.ETHZ.CH
ETH AI Center, Zurich, Switzerland
Abstract
Online learning, in the mistake bound model, is one of the most fundamental concepts in learning theory. Differential privacy, in turn, is the most widely used statistical notion of privacy in the machine learning community. Characterising the learning problems that are online learnable under differential privacy is therefore of great interest. In this paper, we ask whether the two problems are equivalent from a learning perspective, i.e., is privacy for free in the online learning framework?
Keywords: Online Learning, Differential Privacy, Mistake Bound Model
1. Introduction
Online learning, in the mistake bound model, is one of the most fundamental concepts in learning
theory. Let $\mathcal{X} = \bigcup_n \mathcal{X}_n$ be the instance space. The learner $A$, in this model, receives at each timestep $t$ an unlabelled example $x_t \in \mathcal{X}$, predicts a label $\hat{y}_t$ corresponding to $x_t$, and then receives the true label $y_t$ for $x_t$. During this interaction, the learner maintains a working hypothesis $h_t = A(\{(x_\tau, y_\tau)\}_{\tau=1}^{t-1})$, which it uses to predict $\hat{y}_t = h_t(x_t)$, and then uses the true label $y_t$ to update the working hypothesis to $h_{t+1}$. The performance of the learner $A$ is measured by the number of mistakes it makes, i.e.,
\[
\mathrm{Mistakes}\big(A, T, \{(x_t, y_t)\}_{t=1}^{T}\big) := \sum_{t=1}^{T} \mathbb{1}\big(h_t(x_t) \neq y_t\big). \tag{1}
\]
Given this definition of performance, a hypothesis class $\mathcal{C}$ on the instance space $\mathcal{X} = \bigcup_n \mathcal{X}_n$ is said to be online learnable in the mistake bound model if there exists a learner $A$ that makes at most $p(n, \mathrm{size}(c))$ mistakes on any sequence of samples consistent with a concept $c \in \mathcal{C}$, where $p$ is some polynomial. This is also known as the realisable setting.
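To make the mistake bound concrete, here is a minimal sketch (not from the paper) of the classic Halving algorithm on a finite concept class: predicting the majority vote of the version space guarantees at most $\lfloor \log_2 |\mathcal{C}| \rfloor$ mistakes on any realisable sequence. All names below are illustrative.

```python
import math

def halving_learner(concepts, stream):
    """Halving algorithm: keep the version space of concepts consistent
    with all labels seen so far and predict by majority vote. Every
    mistake removes at least half of the version space, so on a
    realisable sequence the learner makes at most floor(log2 |C|)
    mistakes."""
    version_space = list(concepts)
    mistakes = 0
    for x_t, y_t in stream:
        votes = sum(c(x_t) for c in version_space)
        y_hat = 1 if 2 * votes >= len(version_space) else 0
        if y_hat != y_t:
            mistakes += 1
        # keep only the concepts consistent with the revealed true label
        version_space = [c for c in version_space if c(x_t) == y_t]
    return mistakes

# Concept class: the 9 threshold functions on {0, ..., 8}.
concepts = [lambda x, t=t: int(x >= t) for t in range(9)]
target = concepts[5]  # the target concept, unknown to the learner
stream = [(x, target(x)) for x in [7, 0, 3, 6, 4, 5, 2, 1]]
mistakes = halving_learner(concepts, stream)
assert mistakes <= math.floor(math.log2(len(concepts)))
```

Here the mistake bound is $\lfloor \log_2 9 \rfloor = 3$ regardless of the order in which examples arrive, which is the sense in which the bound in Equation (1) is worst-case over sequences.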
Another relevant concept in learning theory is privacy. The most widely used statistical notion of privacy in the machine learning literature is differential privacy. An $(\epsilon, \delta)$-differentially private (randomised) algorithm is guaranteed to output similar distributions over its output space when presented with inputs that differ in only one element. More formally, in the offline setting, a learning algorithm $A : \mathcal{X} \to \mathcal{Y}$ is said to be $(\epsilon, \delta)$-differentially private if, for any two datasets $S_1, S_2$ that differ in just one element, we have that $P[A(S_1) \in Q] \leq e^{\epsilon} P[A(S_2) \in Q] + \delta$, where $Q \subseteq \mathcal{Y}$ is any subset of the output space of the algorithm. We define differential privacy in the online setting in Definition 3.
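As an illustration (not from the paper), the inequality above can be checked exactly when the output space is finite. The sketch below implements randomised response on a single bit, a standard $(\epsilon, 0)$-differentially private mechanism, treating the two possible values of a one-record dataset as neighbouring inputs; the function names are our own.

```python
import math
from itertools import chain, combinations

def rr_output_probs(bit, epsilon):
    """Randomised response: report the true bit with probability
    e^eps / (1 + e^eps), otherwise flip it. Returns the exact output
    distribution over {0, 1} so the DP inequality can be verified
    analytically rather than by sampling."""
    p_true = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return {bit: p_true, 1 - bit: 1.0 - p_true}

def satisfies_dp(probs1, probs2, epsilon, delta=0.0):
    """Check P[A(S1) in Q] <= e^eps * P[A(S2) in Q] + delta for every
    event Q over the finite output space."""
    outputs = sorted(probs1)
    events = chain.from_iterable(
        combinations(outputs, r) for r in range(len(outputs) + 1))
    bound = math.exp(epsilon)
    return all(
        sum(probs1[o] for o in Q) <= bound * sum(probs2[o] for o in Q) + delta + 1e-12
        for Q in events)

eps = 0.5
# Neighbouring inputs: a single record whose bit flips from 0 to 1.
p0, p1 = rr_output_probs(0, eps), rr_output_probs(1, eps)
assert satisfies_dp(p0, p1, eps) and satisfies_dp(p1, p0, eps)
assert not satisfies_dp(p0, p1, eps / 2)  # a tighter budget is violated
```

The ratio of the two output distributions is exactly $e^{\epsilon}$ on each singleton event, which is why the check fails for any smaller privacy budget.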
Some previous works (Jain et al., 2012; Agarwal and Singh, 2017; Abernethy et al., 2019) treat the problem of constructing online learning algorithms (mostly in the regret minimisation setting (Shalev-Shwartz and Singer, 2007)) while maintaining differential privacy. However, it is
© 2022 G. Ramponi & A. Sanyal.