Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano

Chuan Guo¹  Alexandre Sablayrolles¹  Maziar Sanjabi¹

¹Meta AI. Correspondence to: Chuan Guo <chuanguo@meta.com>.

Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). arXiv:2210.13662v2 [cs.LG], 10 Aug 2023.
Abstract

Differential privacy (DP) is by far the most widely accepted framework for mitigating privacy risks in machine learning. However, exactly how small the privacy parameter ϵ needs to be to protect against certain privacy risks in practice is still not well-understood. In this work, we study data reconstruction attacks for discrete data and analyze them under the framework of multiple hypothesis testing. For a learning algorithm satisfying (α, ϵ)-Rényi DP, we utilize different variants of the celebrated Fano's inequality to upper bound the attack advantage of a data reconstruction adversary. Our bound can be numerically computed to calibrate the privacy parameter ϵ to the desired level of privacy protection in practice, and it complements the empirical evidence for the effectiveness of DP against data reconstruction attacks even at relatively large values of ϵ.
1. Introduction
As machine learning becomes increasingly ubiquitous in the real world, proper understanding of the privacy risks of ML also becomes a crucial aspect for its safe adoption. Numerous prior works have demonstrated privacy vulnerabilities throughout the ML training pipeline (Shokri et al., 2017; Song et al., 2017; Nasr et al., 2019; Zhu et al., 2019; Carlini et al., 2021). So far the only comprehensive defense against privacy attacks is differential privacy (DP; Dwork et al., 2006), which has been successfully adapted for training private ML models (Chaudhuri et al., 2011; Shokri & Shmatikov, 2015; Abadi et al., 2016).
Unfortunately, differentially private training also comes at a huge cost to model accuracy if a small privacy parameter ϵ is desired. In contrast, in terms of the level of empirical protection conferred by DP against privacy attacks, the picture is much more optimistic: across a wide range of attacks including membership inference, attribute inference and data reconstruction, even a small amount of DP noise is sufficient for thwarting most attacks (Jayaraman & Evans, 2019; Zhu et al., 2019; Carlini et al., 2019; Hannun et al., 2021). However, there is very little theoretical understanding of this phenomenon.
In this paper, we analyze privacy leakage in connection to the multiple hypothesis testing problem in information theory, and show that the empirical privacy protection conferred by DP with high ϵ may be more than previously thought. To this end, we first define a game (see Figure 1) between a private learner and an adversary that tries to perform a data reconstruction attack against the learned model. We analyze this game using the celebrated Fano's inequality to derive upper bounds on the adversary's attack advantage when the model is trained differentially privately.
Our analysis reveals an interesting and practically important insight: the DP parameter ϵ should scale with the number of possible values M that the private data can take on. When M is large, e.g., M = 10^10 when extracting social security numbers from a trained language model (Carlini et al., 2019), even a relatively large ϵ provides sufficient protection against data reconstruction attacks. More generally, given an input data distribution and a DP parameter ϵ, we give a numerical method for deriving an upper bound on the advantage of an arbitrary data reconstruction adversary. We empirically validate our bound against several existing attacks and show that it can provide useful guidance for selecting the appropriate value of ϵ in practice.
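To give a rough feel for why the protection level can scale with M, the following is a minimal sketch of the classical Fano-style calculation under a uniform prior. It assumes the mutual information I(k; h) between the private index and the released model is bounded by some budget, which we loosely identify with ϵ purely for illustration; this is not the paper's actual bound, which is derived in sections 4 and 5.

```python
import numpy as np

def fano_success_upper_bound(M, mi_budget_nats):
    """Classical Fano bound on P(k_hat = k) for k uniform over M values.

    Fano's inequality: H(k | h) <= log 2 + P_err * log(M - 1) <= log 2 + P_err * log M.
    With a uniform prior, H(k | h) = log M - I(k; h) >= log M - mi_budget_nats, so
        P(k_hat = k) = 1 - P_err <= (mi_budget_nats + log 2) / log M.
    """
    bound = (mi_budget_nats + np.log(2)) / np.log(M)
    return float(np.clip(bound, 1.0 / M, 1.0))  # success is trivially at least 1/M

# Illustration: M = 10^10 possible social security numbers.
# Treating eps as a stand-in for the mutual-information budget (an assumption made
# only for this sketch), even eps = 8 caps the success probability at roughly 38%.
for eps in [0.5, 2.0, 8.0]:
    print(eps, fano_success_upper_bound(M=10**10, mi_budget_nats=eps))
```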
Contributions. Our main contributions are the following:

1. We formalize data reconstruction attacks for discrete data as an attack game (section 3).
2. We use Fano's inequality to derive a numerical method that upper bounds the adversary's advantage for (α, ϵ)-Rényi DP mechanisms (section 4 and section 5).
3. We experimentally validate our advantage bound against existing attacks and show that it can be used to guide the selection of ϵ in practice (section 6).
Figure 1: Illustration of the data reconstruction attack game. The target data takes on M possible values u_1, …, u_M with conditional probabilities P(u_m | x) = p_m. The game begins with k drawn from Categorical(p), and the private learner trains a model h ← M(D ∪ {z_k}). The adversary guesses k̂ and wins if k̂ = k. Advantage is defined so that Adv ≤ 1 and guessing k̂ = arg max_m p_m achieves zero advantage. Processes inside the marked box are unobserved by the adversary, while everything else is observable.
2. Background and Motivation
Privacy attacks. The machine learning pipeline exposes training samples to the outside world through the training procedure and/or the trained model. Prior works showed that adversaries can exploit this exposure to compromise the privacy of training samples. The most well-studied type of privacy attack is the membership inference attack (Shokri et al., 2017; Salem et al., 2018; Yeom et al., 2018; Sablayrolles et al., 2019), which aims to infer whether a sample z corresponding to an individual's data was part of the model's training set. This membership status can be a very sensitive attribute, e.g., whether or not an individual participated in a cancer study indicates their disease status. Most state-of-the-art attacks (Ye et al., 2021; Watson et al., 2021; Carlini et al., 2022) follow a common strategy of comparing the model's loss on the target sample z to that of a reference model trained without z, with a large difference indicating that the sample was seen during training (i.e., a member).
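As a concrete illustration of this loss-comparison strategy, here is a minimal sketch of a simple loss-threshold membership test; the threshold rule and names are illustrative assumptions, not the procedure of any specific attack cited above.

```python
import numpy as np

def loss_threshold_mia(target_model_loss, reference_losses, tau=None):
    """Guess 'member' if the released model's loss on the candidate sample z is
    unusually low compared to reference models trained without z.

    target_model_loss: loss of the released model on z
    reference_losses:  losses on z from reference ("out") models trained without z
    tau:               decision threshold; defaults to the mean reference loss
    """
    if tau is None:
        tau = float(np.mean(reference_losses))
    return target_model_loss < tau  # True -> predict "member"

# Toy usage with made-up loss values:
print(loss_threshold_mia(0.3, reference_losses=[2.1, 1.8, 2.4]))  # True
```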
Other privacy attacks can extract more detailed information about a training sample beyond membership status. Attribute inference attacks (Fredrikson et al., 2014; Yeom et al., 2018) aim to reconstruct a training sample given access to the trained model and partial knowledge of the sample. Data reconstruction attacks (Fredrikson et al., 2015; Carlini et al., 2019; 2021; Balle et al., 2022) relax the partial-knowledge assumption of attribute inference attacks and can recover training samples given only the trained model. In federated learning (McMahan et al., 2017), adversaries that observe the gradient updates can reconstruct private training samples using a process called gradient inversion (Zhu et al., 2019; Geiping et al., 2020). The existence of these privacy attacks calls for countermeasures that can preserve the utility of ML models while preventing unintended leakage of private information.
Differential privacy (Dwork et al., 2006) is a mathematical definition of privacy that upper bounds the amount of information leakage through a private mechanism. In the context of ML, the private mechanism M is a learning algorithm that, given any pair of datasets D and D′ that differ in a single training sample, ensures that M(D) and M(D′) are ϵ-close in distribution for some chosen privacy parameter ϵ > 0. In the classical definition of differential privacy, ϵ-closeness is defined in terms of the max divergence: M is ϵ-differentially private (denoted ϵ-DP) if

$$D_\infty(\mathcal{M}(D) \,\|\, \mathcal{M}(D')) := \sup_{O}\,\big[ \log P(\mathcal{M}(D) \in O) - \log P(\mathcal{M}(D') \in O) \big] \le \epsilon,$$

where O denotes a subset of the model space. One variant of DP that uses the Rényi divergence to quantify closeness is Rényi DP (RDP; Mironov, 2017). For a given α > 1, we say that M is (α, ϵ)-RDP if

$$D_\alpha(\mathcal{M}(D) \,\|\, \mathcal{M}(D')) := \frac{1}{\alpha - 1} \log \mathbb{E}_{h \sim \mathcal{M}(D')}\!\left[ \frac{P(\mathcal{M}(D) = h)^\alpha}{P(\mathcal{M}(D') = h)^\alpha} \right] \le \epsilon.$$

Notably, as α → ∞, (α, ϵ)-RDP coincides with ϵ-DP.
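To make the two divergences concrete, the sketch below evaluates them for two small discrete distributions standing in for the output distributions of M(D) and M(D′). This is purely illustrative: for real learning algorithms these quantities are bounded analytically by a privacy accountant rather than computed from explicit distributions.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) = 1/(alpha - 1) * log sum_i q_i * (p_i / q_i)^alpha."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.log(np.sum(q * (p / q) ** alpha)) / (alpha - 1.0))

def max_divergence(p, q):
    """D_inf(P || Q) = max_i log(p_i / q_i), i.e., the epsilon of pure DP."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.max(np.log(p / q)))

p = [0.6, 0.3, 0.1]  # stand-in for the output distribution of M(D)
q = [0.5, 0.3, 0.2]  # stand-in for the output distribution of M(D')
for alpha in [2, 10, 100, 1000]:
    print(alpha, renyi_divergence(p, q, alpha))
print("max divergence:", max_divergence(p, q))  # the Renyi values approach this as alpha grows
```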
Semantic privacy. Differential privacy has been shown to effectively prevent all of the aforementioned privacy attacks when the privacy parameter ϵ is small enough (Jayaraman & Evans, 2019; Zhu et al., 2019; Carlini et al., 2019; Hannun et al., 2021). Conceptually, when ϵ ≈ 0, the learning algorithm outputs roughly the same distribution of models when a single training sample z is removed or replaced, hence an adversary cannot accurately infer any private information about z. However, this reasoning does not quantify how small ϵ needs to be to prevent a certain class of privacy attacks to a certain degree. In practice, this form of semantic guarantee is arguably more meaningful, as it may inform policy decisions regarding the suitable range of ϵ to provide sufficient privacy protection, and it enables privacy auditing (Jagielski et al., 2020) to verify that the learning algorithm's implementation is compliant.
Several existing works have made partial progress towards answering this question. Yeom et al. (2018) formalized membership inference attacks by defining a game between the private learner and an adversary, and showed that the adversary's advantage (i.e., how well the adversary can infer a particular sample's membership status) is bounded by e^ϵ − 1 when the learning algorithm is ϵ-DP. This bound has been tightened significantly in subsequent works (Erlingsson et al., 2019; Humphries et al., 2020; Mahloujifar et al., 2022; Thudi et al., 2022). Similarly, Bhowmick et al. (2018), Balle et al. (2022), and Guo et al. (2022) formalized data reconstruction attacks and showed that for DP learning algorithms, the adversary's expected reconstruction error can be lower bounded using DP, Rényi DP (Mironov, 2017), and Fisher information leakage (Hannun et al., 2021) privacy accounting. Our work makes further progress in this direction by analyzing data reconstruction attacks using tools from the multiple hypothesis testing literature, which we show are well-suited for discrete data.
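As a quick back-of-the-envelope illustration (a worked example, not a result from the paper) of why such membership-advantage bounds lose meaning at large ϵ:

$$e^{0.1} - 1 \approx 0.11, \qquad e^{0.693} - 1 \approx 1, \qquad e^{4} - 1 \approx 53.6,$$

so a bound of the form $e^{\epsilon} - 1$ on an advantage lying in $[0, 1]$ is informative only for $\epsilon < \ln 2 \approx 0.693$, whereas the regime of practical interest here involves much larger values of ϵ.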
3. Formalizing Data Reconstruction
To understand the semantic privacy guarantee for DP mechanisms against data reconstruction attacks, we first formally define a data reconstruction game for discrete data. Our formulation generalizes the membership inference game in the existing literature (Yeom et al., 2018; Humphries et al., 2020), while specializing the formulation of Balle et al. (2022) to discrete data.
Data reconstruction game. Let D_train = D ∪ {z} be the training set consisting of a public set D and a private record z = (x, u), where x are attributes known to the adversary and u is unknown. Let M be the learning algorithm. We consider a white-box adversary with full knowledge of the public set D and the trained model h = M(D_train), whose objective is to infer the unknown attribute u. Importantly, we assume that the unknown attribute is discrete (e.g., gender, race, marital status) and can take on M values u_1, …, u_M. For example, the experimental setting of Carlini et al. (2019) can be stated as x = "My social security number is" and u ∈ {0, 1, …, 9}^10 is the SSN.
The attack game (see Figure 1 for an illustration) begins by drawing a random index k from a categorical distribution defined by the probability vector p and setting the unknown attribute u = u_k. The private learner M then trains a model h with z = z_k = (x, u_k) and gives it to the adversary, who then outputs a guess k̂ of the underlying index k. Note that both the random index k and any randomness in the learning algorithm M are unobserved by the adversary, but the learning algorithm itself is known.
Success metric. We generalize the advantage metric (Yeom et al., 2018) used in membership inference attack games to multiple categories. Here, the (Bayes) optimal guessing strategy without observing h is to simply guess k̂ = arg max_m p_m, with success rate max_m p_m. The probability of successfully guessing k upon observing h, i.e., P(k̂ = k), must be at least max_m p_m in order to meaningfully leverage the private information contained in h about u. Thus, we define advantage as the (normalized) difference between P(k̂ = k) and the baseline success rate p* := max_m p_m, i.e.,

$$\mathrm{Adv} := \frac{P(\hat{k} = k) - p^*}{1 - p^*} \in [0, 1]. \tag{1}$$
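To make the game and the advantage metric concrete, here is a minimal Monte Carlo sketch. The mechanism is a toy ϵ-DP randomized response applied directly to the index k (standing in for model training), and the adversary simply reports the released index; this only illustrates how Adv is estimated, and is not an implementation of the attacks or bounds studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(k, M, eps):
    """Toy eps-DP mechanism standing in for the private learner: report the true
    index with probability e^eps / (e^eps + M - 1), otherwise a uniformly random
    other index (the likelihood ratio between any two inputs is e^eps)."""
    if rng.random() < np.exp(eps) / (np.exp(eps) + M - 1):
        return k
    return rng.choice([m for m in range(M) if m != k])

def estimate_advantage(p, eps, n_trials=50_000):
    """Simulate the game of Figure 1 and estimate Adv from Eq. (1)."""
    p = np.asarray(p, dtype=float)
    M = len(p)
    p_star = p.max()                        # baseline success rate
    ks = rng.choice(M, size=n_trials, p=p)  # k ~ Categorical(p)
    hits = sum(randomized_response(k, M, eps) == k for k in ks)  # guess = released index
    success_rate = hits / n_trials          # estimate of P(k_hat = k)
    return (success_rate - p_star) / (1.0 - p_star)

# With a uniform prior over M = 10 values, the estimated advantage grows with eps:
for eps in [0.1, 1.0, 4.0]:
    print(eps, round(estimate_advantage(p=[0.1] * 10, eps=eps), 3))
```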
Interpretation. Our data reconstruction game has the following important implications for privacy semantics.

1. The private attribute u is considered leaked if and only if it is guessed exactly. This is a direct consequence of defining adversary success as k̂ = k. For example, if the attribute is a person's age, then guessing k̂ = 50 when the ground truth is k = 49 should be considered more successful than when k = 40. In such settings, it may be more suitable to partition the input space into broader categories, e.g., age ranges 0-9, 10-19, etc., to allow inexact guesses.

2. The attack game subsumes attribute inference attacks. This can be done by setting the known attribute x accordingly. When x = ∅, our game corresponds to the scenario commonly referred to as data reconstruction in the existing literature (Carlini et al., 2019; 2021; Balle et al., 2022).

3. Prior information is captured through the sampling probability p. The success rate of the Bayes optimal strategy is p* = max_m p_m, which depends on the sampling probability vector p. In the extreme case where p is a delta distribution on some k, which corresponds to the adversary having perfect knowledge of the private attribute u, the model h provides no additional information about u. This is in accordance with the "no free lunch theorem" in