Mitigating Gender Bias in Face Recognition
Using the von Mises-Fisher Mixture Model

Jean-Rémy Conti *1 2   Nathan Noiry *1   Vincent Despiegel 2   Stéphane Gentric 2   Stéphan Clémençon 1

*Equal contribution. 1 LTCI, Télécom Paris, Institut Polytechnique de Paris. 2 Idemia. Correspondence to: Jean-Rémy Conti <jean-remy.conti@telecom-paris.fr>, Nathan Noiry <nathan.noiry@gmail.com>.

Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).
Abstract

In spite of the high performance and reliability of deep learning algorithms in a wide range of everyday applications, many investigations tend to show that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g. gender, ethnicity). This urges the practitioner to develop fair systems with a uniform/comparable performance across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. In order to measure this bias, we introduce two new metrics, BFAR and BFRR, that better reflect the inherent deployment needs of Face Recognition systems. Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology which transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups. It consists of training a shallow neural network by minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the intra-class variance of each gender. Interestingly, we empirically observe that these hyperparameters are correlated with our fairness metrics. In fact, extensive numerical experiments on a variety of datasets show that a careful selection of these hyperparameters significantly reduces gender bias. The code used for the experiments can be found at https://github.com/JRConti/EthicalModule_vMF.
1. Introduction

In the past few years, Face Recognition (FR) systems have reached extremely high levels of performance, paving the way to a broader range of applications, for which reliability levels had previously been too low to consider automation. This is mainly due to the adoption of deep learning techniques in computer vision since the famous breakthrough of (Krizhevsky et al., 2012). The increasing use of deep FR systems has however raised concerns, as any technological flaw could have a strong societal impact. Besides recent isolated events that received significant media coverage (see for instance the study conducted by the American Civil Liberties Union), the academic community has studied the bias of FR systems for many years, dating back at least to (Phillips et al., 2003), who investigated the racial bias of non-deep FR algorithms. In (Abdurrahim et al., 2018), three sources of bias are identified: race (understood as biological attributes such as skin color), age and gender (the gender labels available in FR datasets are male and female). The National Institute of Standards and Technology (Grother et al., 2019) conducted a thorough analysis of the performance of several FR algorithms depending on these attributes and revealed high disparities. For instance, some of the top state-of-the-art algorithms in absolute performance produce more than seven times more false acceptances for females than for males. In this paper, we introduce a novel methodology to mitigate gender bias in FR. Though focusing on a single source of bias has obvious limitations regarding intersectional effects (Buolamwini & Gebru, 2018), it is a first step toward understanding the mechanisms at work before turning to more complex situations. The method promoted in this paper is in fact much more general than the application considered here and could possibly alleviate many other types of bias; this will be the subject of future work.

The study of the different types of bias and the elaboration of methods to alleviate them is referred to as fairness in machine learning, a topic which has received increasing attention in recent years, see e.g. (Mehrabi et al., 2019), (Caton & Haas, 2020), (Du et al., 2020). Roughly speaking, achieving fairness means learning a decision rule that does not mistreat some predefined subgroups, while still exhibiting a good predictive performance on the overall population: in general, a trade-off has to be found between fair treatment and pure accuracy (this dichotomy somewhat simplifies the problem, since an increase in accuracy could also lead to a better treatment of each subgroup of the population). In this regard, one needs to carefully define the relevant fairness metric. From a theoretical viewpoint, several metrics have been introduced, see e.g. (Garg et al., 2020) or (Castelnovo et al., 2021) among others, depending on how the concept of equity of treatment is understood. In practice, these very refined notions can be inadequate, as they ignore specific use case issues, and one thus needs to adapt them carefully. This is particularly the case in FR, where high security standards cannot be negotiated. The goal of this article is twofold: novel fairness metrics, relevant in FR applications in particular, are introduced at length and empirically shown to have room for improvement by means of appropriate/flexible representation models.
Contribution 1. We propose two new metrics, BFAR and BFRR, that incorporate the needs for both security and fairness (see Section 2.2). More precisely, the BFAR (resp. BFRR) metric accounts for the disparity in false acceptance (resp. false rejection) rates between subgroups of interest, computed at an operating point such that each subgroup has a false acceptance rate lower than a reference false acceptance level.
It turns out that state-of-the-art FR networks (e.g. ArcFace (Deng et al., 2019a)) exhibit poor fairness performance w.r.t. gender, both in terms of BFAR and BFRR. Different strategies could be considered to alleviate this gender bias: pre-, in- and post-processing methods (Caton & Haas, 2020), depending on whether the practitioner's fairness intervention occurs before, during or after the training phase. The first one, pre-processing, is not well suited to FR purposes, as shown in (Albiero et al., 2020), while the second one, in-processing, has the major drawback of requiring a full re-training of a deep neural network. This encouraged us to design a post-processing method to mitigate the gender bias of pre-trained FR models.
In order to improve the BFAR and BFRR disparities, we crucially rely on the geometric structure of the last layer of state-of-the-art FR neural networks, which is a set of embeddings lying on a hypersphere. Those embeddings are obtained through two concurrent mechanisms at work during the learning process: (i) repelling images of different identities and (ii) bringing together images of the same identity.
Contribution 2. We set a von Mises-Fisher statistical mixture model on the last layer representation, which corresponds to a mixture of Gaussian random variables conditioned to lie on the hypersphere. Based on the maximum likelihood of this model, we introduce a new loss, called Fair von Mises-Fisher, that we use to supervise the training of a shallow neural network we call the Ethical Module. Taking the variance parameters as hyperparameters that depend on the gender, this flexible model is able to capture the two previously mentioned mechanisms of repulsion/attraction, which we show are at the origin of the biases in FR. Indeed, our experiments remarkably exhibit a substantial correlation between these hyperparameters and our fairness metrics BFAR and BFRR, suggesting a hidden regularity captured by the proposed model. More precisely, we identify regions of hyperparameter values that (i) significantly improve BFAR while keeping a reasonable performance but degrading BFRR, (ii) significantly improve BFRR while keeping a reasonable performance but degrading BFAR, and (iii) improve both BFAR and BFRR at the cost of little performance degradation. This third case actually achieves state-of-the-art results among post-processing methods for gender bias mitigation in FR.
[Figure 1: Illustration of the Ethical Module methodology. A frozen pre-trained model feeds its embeddings of the training set to a shallow MLP trained with the Fair von Mises-Fisher loss using a sensitive attribute. In gray, our experiment choices: ArcFace as the pre-trained model, MS1MV3 as the training set, gender as the sensitive attribute, and an MLP of size (512, 1024, 512).]
Besides a simple architecture and fast training (a few hours), the Ethical Module enjoys several benefits we would like to highlight.

Taking advantage of foundation models. In the recent survey (Bommasani et al., 2021), the authors judiciously point out a change of paradigm in deep learning: very efficient pre-trained models with billions of parameters, which they call foundation models, are at our disposal, such as BERT (Devlin et al., 2018) in NLP or ArcFace (Deng et al., 2019a) in FR. Many works rely on these powerful models and fine-tune them, inheriting both their strengths and their weaknesses, such as their biases. Hence the need to focus on methods that improve the fairness of foundation models: our method is in line with this approach.
No sensitive attribute used during deployment. Though the Ethical Module requires access to the sensitive label during its training phase, this label (e.g. gender) is no longer needed once training is completed. This is compliant with EU legislation that forbids the use of protected attributes in prediction rules.
Organization of the paper. Section 2.1 presents the widespread usage of FR and its main challenges. It is followed by Section 2.2, where we discuss different fairness metrics that arise in FR and introduce two new ones which we believe are more relevant with regard to operational use cases. In Section 3, we present the von Mises-Fisher loss that is used for the training of the Ethical Module and discuss its benefits. Finally, in Section 4, we present our numerical experiments at length; they partly consist in learning an Ethical Module on the ArcFace model, pre-trained on the MS1MV3 dataset (Deng et al., 2019b). Our results show that, remarkably, some specific choices of hyperparameters provide high performance and low fairness metrics at the same time.
Related works. The correction of bias in FR has been the subject of several recent papers. (Liu et al., 2019) and (Wang & Deng, 2020) use reinforcement learning to learn fair decision rules but, despite their mathematical relevance, such methods are computationally prohibitive. Another line of research, followed by (Yin et al., 2019), (Wang et al., 2019a) and (Huang et al., 2019), assumes that bias comes from the unbalanced nature of FR datasets and builds on imbalanced and transfer learning methods. Unfortunately, these methods do not completely remove bias, and it has recently been pointed out that balanced datasets are actually not enough to mitigate bias, as illustrated by (Albiero et al., 2020) for gender bias, (Gwilliam et al., 2021) for racial bias and (Wang et al., 2019b) for gender bias in face detection. (Gong et al., 2019), (Alasadi et al., 2019) and (Dhar et al., 2021) rely on adversarial methods, which can reduce bias but are also known to be unstable and computationally expensive. All of the previously mentioned methods try to learn fair representations. In contrast, some other works do not affect the latent space but modify the decision rule instead: (Terhörst et al., 2020) act on the score function whereas (Salvador et al., 2021) rely on calibration methods. Despite encouraging results, these approaches do not address the source of the problem, which is the bias incurred by the embeddings used.
2. Fairness in Face Recognition

In this section, we first briefly recall the main principles of deep Face Recognition and introduce some notation. The interested reader may consult (Masi et al., 2018) or (Wang & Deng, 2018) for a detailed exposition. Then, we present the fairness metrics we adopt and argue for their relevance in our framework.
2.1. Overview of Face Recognition

Framework. A typical FR dataset consists of face images of individuals, from which we wish to predict the identities. Assuming that the images are of size $h \times w$ and that there are $K$ identities among the images, this can be modeled by i.i.d. realizations of a random variable $(X, y) \in \mathbb{R}^{h \times w \times c} \times \{1, \dots, K\}$, where $c$ corresponds to the color channel dimension. In the following, we denote by $\mathbb{P}$ the corresponding probability law.
Objective. The usual goal of FR is to learn an encoder function $f_\theta : \mathbb{R}^{h \times w \times c} \to \mathbb{R}^d$ that embeds the images in a way that brings same identities closer together. The resulting latent representation $Z := f_\theta(X)$ is the face embedding of $X$. Since the advent of deep learning, the encoder is a deep Convolutional Neural Network (CNN) whose parameters $\theta$ are learned on a huge FR dataset $(x_i, y_i)_{1 \le i \le N}$ made of $N$ i.i.d. realizations of the random variable $(X, y)$. There are generally two FR use cases: identification, which consists in finding the specific identity of a probe face among several previously enrolled identities, and verification (on which we focus throughout this paper), which aims at deciding whether two face images correspond to the same identity or not. To do so, the closeness between two embeddings is usually quantified with the cosine similarity measure $s(z_i, z_j) := z_i^\top z_j / (\|z_i\| \cdot \|z_j\|)$, where $\|\cdot\|$ stands for the usual Euclidean norm (the Euclidean metric $\|z_i - z_j\|$ is also used in some early works, e.g. (Schroff et al., 2015)). Therefore, an operating point $t \in [-1, 1]$ (threshold of acceptance) has to be chosen to classify a pair $(z_i, z_j)$ as genuine (same identity) if $s(z_i, z_j) \ge t$ and impostor (distinct identities) otherwise.
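To make the verification rule concrete, here is a minimal sketch (not from the paper) of the cosine-similarity decision described above; the embedding dimension, the threshold value and the function names are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(z_i: np.ndarray, z_j: np.ndarray) -> float:
    """Cosine similarity s(z_i, z_j) = z_i^T z_j / (||z_i|| * ||z_j||)."""
    return float(z_i @ z_j / (np.linalg.norm(z_i) * np.linalg.norm(z_j)))

def verify(z_i: np.ndarray, z_j: np.ndarray, t: float) -> bool:
    """Classify a pair as genuine (True) if the similarity reaches the operating point t."""
    return cosine_similarity(z_i, z_j) >= t

# Illustrative usage with random 512-d embeddings and an arbitrary threshold.
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=512), rng.normal(size=512)
print(verify(z1, z2, t=0.3))  # unrelated random vectors are almost surely rejected
```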
Training. For the training phase only, a fully-connected layer is added on top of the deep embeddings so that the output is a $K$-dimensional vector, predicting the identity of each image within the training set. The full model (CNN + fully-connected layer) is trained on an identity classification task. Until 2018, most of the popular FR loss functions were of the form:

$$\mathcal{L} = -\frac{1}{n} \sum_{i=1}^{n} \log\left( \frac{e^{\kappa\, \mu_{y_i}^\top z_i}}{\sum_{k=1}^{K} e^{\kappa\, \mu_k^\top z_i}} \right), \qquad (1)$$

where the $\mu_k$'s are the fully-connected layer's parameters, $\kappa > 0$ is the inverse temperature of the softmax function used in brackets and $n$ is the batch size. Early works (Taigman et al., 2014; Sun et al., 2014) took $\kappa = 1$ and used a bias term in the fully-connected layer, but (Wang et al., 2017) showed that the bias term degrades the performance of the model. It was thus quickly discarded in later works. Since the canonical similarity measure at the test stage is the cosine similarity, the decision rule only depends on the angle between two embeddings, whereas it could depend on the norms of $\mu_k$ and $z_i$ during training. This has led (Wang et al., 2017) and (Hasnat et al., 2017) to add a normalization step during training, taking $\mu_k, z_i \in S^{d-1} := \{z \in \mathbb{R}^d : \|z\| = 1\}$, as well as introducing the re-scaling parameter $\kappa$ in Eq. 1: these ideas significantly improved upon former models and are now widely adopted. The hypersphere $S^{d-1}$ to which the embeddings belong is commonly called the face hypersphere. Denoting by $\theta_i$ the angle between $\mu_{y_i}$ and $z_i$, the major advance over the loss of Eq. 1 (with normalization of $\mu_k, z_i$) in recent years was to consider large-margin losses, which replace $\mu_{y_i}^\top z_i = \cos(\theta_i)$ by a function that reduces intra-class angle variations, such as the $\cos(m\,\theta_i)$ of (Liu et al., 2017) or the $\cos(\theta_i) - m$ of (Wang et al., 2018). The most efficient choice is $\cos(\theta_i + m)$, due to (Deng et al., 2019a), who called their model ArcFace, on which we build our methodology. A fine training should result in the alignment of each embedding $z_i$ with the vector $\mu_{y_i}$. The aim is to bring together embeddings with the same identity. Indeed, during the test phase, the learned algorithm has to decide whether two face images belong to the same, potentially unseen, individual (one refers to an open set framework).
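For illustration, the following PyTorch sketch (not taken from the authors' repository) implements the normalized softmax of Eq. 1 together with the additive angular margin $\cos(\theta_i + m)$ of ArcFace; the class name, the default values of $\kappa$ and $m$, and the clamping constant are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginSoftmax(nn.Module):
    """Normalized softmax head with an optional additive angular margin (ArcFace-style)."""

    def __init__(self, d: int, num_classes: int, kappa: float = 64.0, margin: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, d))  # the centroids mu_k
        self.kappa = kappa    # inverse temperature / re-scaling parameter
        self.margin = margin  # additive angular margin m (0.0 recovers Eq. 1)

    def forward(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Normalize both embeddings and centroids onto the hypersphere S^{d-1}.
        z = F.normalize(z, dim=1)
        mu = F.normalize(self.weight, dim=1)
        cos_theta = z @ mu.t()                                        # cos(theta) for every class
        theta = torch.acos(cos_theta.clamp(-1 + 1e-7, 1 - 1e-7))
        # Apply the margin only to the true-class angle: cos(theta_i + m).
        target_logit = torch.cos(theta.gather(1, y.unsqueeze(1)) + self.margin)
        logits = cos_theta.scatter(1, y.unsqueeze(1), target_logit)
        # Cross-entropy on the re-scaled logits is the loss of Eq. 1 when margin = 0.
        return F.cross_entropy(self.kappa * logits, y)
```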
Evaluation metrics. Let $(X_1, y_1)$ and $(X_2, y_2)$ be two independent random variables with law $\mathbb{P}$. We distinguish between the False Acceptance Rate and the False Rejection Rate, respectively defined by

$$\mathrm{FAR}(t) := \mathbb{P}\big(s(Z_1, Z_2) \ge t \mid y_1 \neq y_2\big), \qquad \mathrm{FRR}(t) := \mathbb{P}\big(s(Z_1, Z_2) < t \mid y_1 = y_2\big).$$

These quantities are crucial to evaluate a given algorithm in our context: Face Recognition is intrinsically linked to biometric applications, where the usual accuracy metric is not sufficient to assess the quality of a learned decision rule. For instance, security automation in an airport requires a very low FAR while keeping a reasonable FRR to ensure a pleasant user experience. As a result, the most widely used metric consists in first fixing a threshold $t$ so that the FAR is equal to a pre-defined value $\alpha \in [0, 1]$, and then computing the FRR at this threshold. We use the canonical FR notation to denote the resulting quantity:

$$\mathrm{FRR}@(\mathrm{FAR} = \alpha) := \mathrm{FRR}(t) \quad \text{with} \quad \mathrm{FAR}(t) = \alpha.$$

The FAR level $\alpha$ determines the operating point of the FR system and corresponds to the security risk one is ready to take. Depending on the use case, it is typically set to $10^{-i}$ with $i \in \{1, \dots, 6\}$.
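As a purely illustrative sketch (not part of the paper), FRR@(FAR = α) can be estimated from arrays of genuine and impostor similarity scores; the variable names and the quantile-based threshold choice below are assumptions.

```python
import numpy as np

def frr_at_far(genuine_scores: np.ndarray, impostor_scores: np.ndarray, alpha: float):
    """Estimate FRR@(FAR = alpha) from empirical score distributions."""
    # Threshold t chosen so that the empirical FAR is (approximately) alpha:
    # the (1 - alpha)-quantile of the impostor score distribution.
    t = np.quantile(impostor_scores, 1.0 - alpha)
    # FRR(t): fraction of genuine pairs whose similarity falls below t.
    return float(np.mean(genuine_scores < t)), float(t)
```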
2.2. Incorporating Fairness

While the FRR@FAR metric is the standard choice for measuring the performance of an FR algorithm, it does not take into account its variability among different subgroups of the population. In order to assess and correct potential discriminatory biases, the practitioner must rely on suitable fairness metrics.

Framework. In order to incorporate fairness with respect to a given discrete sensitive attribute that can take $A > 1$ different values, we enrich our previous model and consider a random variable $(X, y, a)$ where $a \in \{0, 1, \dots, A-1\}$. With a slight abuse of notation, we still denote by $\mathbb{P}$ the corresponding probability law and, for every fixed value $a$, we further define

$$\mathrm{FAR}_a(t) := \mathbb{P}\big(s(Z_1, Z_2) \ge t \mid y_1 \neq y_2,\ a_1 = a_2 = a\big),$$
$$\mathrm{FRR}_a(t) := \mathbb{P}\big(s(Z_1, Z_2) < t \mid y_1 = y_2,\ a_1 = a_2 = a\big).$$

In our case, we focus on gender bias, so we take $A = 2$ with the convention that $a = 0$ stands for male and $a = 1$ for female.
Existing fairness metrics. Before specifying our choice of fairness metric, let us review some existing ones (Mehrabi et al., 2019) that derive from fairness in the context of binary classification (in FR, one classifies pairs into two groups: genuine or impostor). The Demographic Parity criterion requires the prediction to be independent of the sensitive attribute, which amounts to equalizing the likelihood of being genuine conditional on $a = 0$ and $a = 1$. Besides heavily depending on the number and quality of impostor and genuine pairs among subgroups, this criterion does not take into account the FARs and FRRs, which are instrumental in FR as previously mentioned. An attempt to incorporate those criteria could be to compare the intra-group performances: $\mathrm{FRR}_0@(\mathrm{FAR}_0 = \alpha)$ vs. $\mathrm{FRR}_1@(\mathrm{FAR}_1 = \alpha)$. However, the operating points $t_0$ and $t_1$ satisfying $\mathrm{FAR}_0(t_0) = \alpha$ and $\mathrm{FAR}_1(t_1) = \alpha$ generically differ, as pointed out by (Krishnapriya et al., 2020). To fairly assess the equity of an algorithm, one needs to compare intra-group FARs and FRRs at the same threshold. Two such criteria exist in the fairness literature: the Equal Opportunity criterion, which requires $\mathrm{FRR}_0(t) = \mathrm{FRR}_1(t)$, and the Equalized Odds criterion, which additionally requires $\mathrm{FAR}_0(t) = \mathrm{FAR}_1(t)$. Nevertheless, working at an arbitrary threshold $t$ does not really make sense since, as previously mentioned, FR systems typically choose an operating point achieving a predefined FAR level so as to limit security breaches. This is why most current papers consider a fixed operating point $t$ such that the global population False Acceptance Rate equals a fixed value $\alpha$. For instance, (Dhar et al., 2021) computes

$$|\mathrm{FRR}_1(t) - \mathrm{FRR}_0(t)| \quad \text{with} \quad \mathrm{FAR}(t) = \alpha. \qquad (2)$$

However, we think that the choice of a threshold achieving a global FAR is not entirely relevant, for it depends on the relative proportions of females and males in the considered dataset, together with the relative proportion of intra-group impostors. For instance, at fixed image quality, if females represent a small proportion of the evaluation dataset, the threshold $t$ of Eq. 2 is close to the male threshold $t_0$ satisfying $\mathrm{FAR}_0(t_0) = \alpha$ and away from the female threshold $t_1$ satisfying $\mathrm{FAR}_1(t_1) = \alpha$. Such variability among datasets could lead to incorrect conclusions.
New fairness metrics. In this paper, we go one step further and work at a threshold achieving $\max_a \mathrm{FAR}_a = \alpha$ instead of $\mathrm{FAR} = \alpha$. This alleviates the previous dependence on subgroup proportions. Besides, it allows one to monitor the risk taken on each subgroup: for a pre-defined rate $\alpha$ deemed acceptable, one typically would like to compare the performance among subgroups at a threshold where each subgroup satisfies $\mathrm{FAR}_a \le \alpha$. Our two resulting metrics are thus:

$$\mathrm{BFRR}(\alpha) := \frac{\max_{a \in \{0,1\}} \mathrm{FRR}_a(t)}{\min_{a \in \{0,1\}} \mathrm{FRR}_a(t)} \qquad (3)$$

and

$$\mathrm{BFAR}(\alpha) := \frac{\max_{a \in \{0,1\}} \mathrm{FAR}_a(t)}{\min_{a \in \{0,1\}} \mathrm{FAR}_a(t)}, \qquad (4)$$

where $t$ is taken such that $\max_{a \in \{0,1\}} \mathrm{FAR}_a(t) = \alpha$.

The above acronyms read "Bias in FRR/FAR". In addition to being more security-demanding than previous metrics, BFRR and BFAR are more amenable to interpretation: the ratio of FRRs or FARs corresponds to the number of times the algorithm makes more mistakes on the discriminated subgroup. These metrics generalize naturally to more than two distinct values of the sensitive attribute.
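The sketch below (illustrative, not the authors' evaluation code) estimates BFAR and BFRR from per-group genuine and impostor scores, following Eqs. 3-4; the dictionary-based data layout and the small numerical floor are assumptions.

```python
import numpy as np

def bfar_bfrr(genuine: dict, impostor: dict, alpha: float) -> tuple:
    """genuine[a] / impostor[a]: arrays of similarity scores for subgroup a.

    Returns (BFAR, BFRR) at the threshold t such that max_a FAR_a(t) = alpha.
    """
    groups = list(genuine)
    # Per-group threshold achieving FAR_a = alpha; the global operating point is the
    # largest of these, so every subgroup then satisfies FAR_a <= alpha.
    t = max(np.quantile(impostor[a], 1.0 - alpha) for a in groups)
    far = {a: np.mean(impostor[a] >= t) for a in groups}
    frr = {a: np.mean(genuine[a] < t) for a in groups}
    bfar = max(far.values()) / max(min(far.values()), 1e-12)
    bfrr = max(frr.values()) / max(min(frr.values()), 1e-12)
    return bfar, bfrr
```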
3. Geometric Mitigation of Biases

Contrary to a common belief about the origin of bias, training an FR model on a balanced training set (i.e. with as many female identities/images as male identities/images) is not enough to mitigate gender bias in FR (Albiero et al., 2020). It is therefore necessary to intervene by designing a model that counteracts the gender bias.
3.1. A Geometrical Embedding View on Fairness

In fact, impostor scores (cosine similarities of impostor pairs) are higher for females than for males, while genuine scores are lower for females than for males (Grother et al., 2019; Robinson et al., 2020). This puts females at a disadvantage compared to males in terms of both FAR and FRR. Typically, this is due to (i) a smaller repulsion between female identities and/or (ii) a greater intra-class variance (spread of the embeddings of each identity) for female identities, as illustrated in Figure 2. Thus, we present in the following a statistical model which makes it possible to set the intra-class variance of each identity on the face hypersphere.

[Figure 2: Illustration of the geometric nature of bias. Each point is the embedding of an image. In green: two male identities. In red: two female identities. The overlapping region between two identities is larger for females than for males. The grey circles are the acceptance zones, centered around a reference embedding, associated with a constant acceptance threshold t.]

3.2. von Mises-Fisher Mixture Model

In order to mitigate the gender bias of deep FR systems, we set a statistical model on the latent representations of images. Recall that we assumed that each individual of an FR dataset is an i.i.d. realization of a random variable $(X, y, a)$, where $X$ is the image, $y$ the identity and $a$ the gender attribute. Also recall that, both at the training and testing stages, the embeddings are normalized onto the hypersphere, meaning that $Z = f_\theta(X) \in S^{d-1}$. As previously mentioned, a fine learning should result in an alignment of the embeddings $\{z_i\}$ of a same identity $y_i$ around their associated centroid $\mu_{y_i} \in S^{d-1}$. It is therefore reasonable to assume that the embeddings of a same identity are i.i.d. realizations of a radial distribution of Gaussian type on the hypersphere, centered at $\mu_{y_i}$. A natural choice is the so-called von Mises-Fisher (vMF) distribution, which is nothing but the law of a Gaussian conditioned to live on the hypersphere. Before turning to the formal definition of the statistical model we put on the hypersphere, let us give the definition of this vMF distribution.
The von Mises-Fisher distribution. The vMF distribution in dimension $d$ with mean direction $\mu \in S^{d-1}$ and concentration parameter $\kappa > 0$ is a probability measure defined on the hypersphere $S^{d-1}$ by the following density:

$$V_d(z; \mu, \kappa) := C_d(\kappa)\, e^{\kappa \mu^\top z}, \quad \text{with} \quad C_d(\kappa) = \frac{\kappa^{d/2 - 1}}{(2\pi)^{d/2}\, I_{d/2 - 1}(\kappa)}.$$

$I_\nu$ stands for the modified Bessel function of the first kind of order $\nu$, whose logarithm can be computed with high precision (see supplementary material A.1). The vMF distribution corresponds to a Gaussian distribution in dimension $d$ with mean $\mu$ and covariance matrix $(1/\kappa) I_d$, conditioned to live on $S^{d-1}$. Figure 3 illustrates the influence of the concentration parameter $\kappa$ on the vMF distribution.
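As an illustration (not the paper's supplementary code), the vMF log-density can be evaluated by computing $\log I_\nu(\kappa)$ through SciPy's exponentially scaled Bessel function; the function name is an assumption, and for very high dimensions a dedicated high-precision routine (as in the paper's supplementary material A.1) may be needed to avoid underflow.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function of the first kind

def vmf_log_density(z: np.ndarray, mu: np.ndarray, kappa: float) -> float:
    """log V_d(z; mu, kappa) for unit-norm z and mu on the hypersphere S^{d-1}."""
    d = z.shape[0]
    nu = d / 2.0 - 1.0
    # log I_nu(kappa) = log(ive(nu, kappa)) + kappa, since ive(nu, x) = I_nu(x) * exp(-x).
    log_bessel = np.log(ive(nu, kappa)) + kappa
    log_cd = nu * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) - log_bessel
    return float(log_cd + kappa * mu @ z)
```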
Mixture model. Since the vMF distribution seems to reflect well the distribution of the embeddings of one identity around their centroid, we extend the model to include all $K$ identities from the training set by considering a mixture model where each component $k$ ($1 \le k \le K$) is equiprobable and follows a vMF distribution $V_d(z; \mu_k, \kappa_k)$. Figure 4 provides an illustration of the mixture model.
Maximum likelihood. Let $N \ge 1$ and $(x_i, y_i, a_i)_{1 \le i \le N}$ be i.i.d. realizations of $(X, y, a)$. Under the previous vMF mixture model assumption, the probability $p_{ij}$ that a face embedding $z_i = f_\theta(x_i)$ belongs to identity $j$ is given by

$$p_{ij} = \frac{V_d(z_i; \mu_j, \kappa_j)}{\sum_{k=1}^{K} V_d(z_i; \mu_k, \kappa_k)} = \frac{C_d(\kappa_j)\, e^{\kappa_j \mu_j^\top z_i}}{\sum_{k=1}^{K} C_d(\kappa_k)\, e^{\kappa_k \mu_k^\top z_i}}.$$
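To make the construction concrete, here is a minimal PyTorch sketch (an illustration under stated assumptions, not the authors' Fair vMF implementation) of the negative log-likelihood $-\frac{1}{n}\sum_i \log p_{i y_i}$ under the vMF mixture, with per-identity concentrations $\kappa_k$ that could, for instance, be set per gender; `log_cd` is a placeholder for the precomputed normalizing constants of the previous snippet.

```python
import torch

def vmf_mixture_nll(z, y, mu, kappa, log_cd):
    """Negative log-likelihood of the vMF mixture: -mean_i log p_{i, y_i}.

    z:      (n, d) unit-norm embeddings on the face hypersphere
    y:      (n,)   identity labels in {0, ..., K-1}
    mu:     (K, d) unit-norm identity centroids
    kappa:  (K,)   per-identity concentration parameters (e.g. gender-dependent)
    log_cd: (K,)   log C_d(kappa_k), precomputed as in the previous snippet
    """
    # Unnormalized log-probabilities: log C_d(kappa_k) + kappa_k * mu_k^T z_i
    logits = log_cd.unsqueeze(0) + (z @ mu.t()) * kappa.unsqueeze(0)   # (n, K)
    log_p = logits - torch.logsumexp(logits, dim=1, keepdim=True)      # log p_{ik}
    return -log_p.gather(1, y.unsqueeze(1)).mean()
```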