Validation of Composite Systems by Discrepancy Propagation
David Reeb Kanil Patel Karim Barsim Martin Schiegg Sebastian Gerwinn
{david.reeb, kanil.patel, karim.barsim, martin.schiegg, sebastian.gerwinn}
@de.bosch.com
Robert Bosch GmbH, Bosch Center for Artificial Intelligence, 71272 Renningen, Germany
Abstract
Assessing the validity of a real-world system with respect to given quality criteria is a common yet costly task in industrial applications due to the vast number of required real-world tests. Validating such systems by means of simulation offers a promising and less expensive alternative, but requires an assessment of the simulation accuracy and therefore end-to-end measurements. Additionally, covariate shifts between simulations and actual usage can cause difficulties for estimating the reliability of such systems. In this work, we present a validation method that propagates bounds on distributional discrepancy measures through a composite system, thereby allowing us to derive an upper bound on the failure probability of the real system from potentially inaccurate simulations. Each propagation step entails an optimization problem, where – for measures such as maximum mean discrepancy (MMD) – we develop tight convex relaxations based on semidefinite programs. We demonstrate that our propagation method yields valid and useful bounds for composite systems exhibiting a variety of realistic effects. In particular, we show that the proposed method can successfully account for data shifts within the experimental design as well as model inaccuracies within the simulation.
1 INTRODUCTION
Industrial products cannot be released without a priori ensuring their validity, i.e. the product must be validated to work according to its specifications with high probability. Such validation is essential for safety-critical systems (e.g. autonomous cars, airplanes, medical machines) or systems with legal requirements (e.g. limits on output emissions or power consumption of new vehicle types), see e.g. (Kalra and Paddock, 2016; Koopman and Wagner, 2016; Belcastro and Belcastro, 2003). When relying on real-world testing alone to validate system-wide requirements, one must perform enough test runs to guarantee an acceptable failure rate, e.g. at least $10^6$ runs for a guarantee below $10^{-6}$. This is costly not only in terms of money but also in terms of time-to-release, especially when a failed system test necessitates further design iterations.
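As a back-of-the-envelope illustration of this cost (our own sketch, not part of the paper's argument): even $10^6$ failure-free test runs are weak evidence for a failure rate below $10^{-6}$, and the classical "rule of three" puts the required number of runs closer to $3 \times 10^6$.

```python
import math

# Probability of observing ZERO failures in n = 1e6 runs when the true
# failure rate is exactly p = 1e-6: (1 - p)^n ~ exp(-1), i.e. such an
# outcome occurs only ~37% of the time even for a just-compliant system.
p_target = 1e-6
n = 10**6
prob_zero_failures = (1 - p_target) ** n
print(prob_zero_failures)  # ~ 0.3679, close to exp(-1)

# Conversely, to certify p < 1e-6 at 95% confidence from failure-free
# runs alone, one needs (1 - p)^n <= 0.05, i.e. roughly -ln(0.05)/p runs
# ("rule of three": upper bound ~ 3/n).
n_required = math.ceil(-math.log(0.05) / p_target)
print(n_required)  # ~ 3.0e6 runs
```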
System validation is particularly difficult for complex systems which typically consist of multiple components, often developed and tested by different teams under varying operating conditions. For example, an advanced driver-assistance system is built from several sensors and controllers, which come from different suppliers but together must guarantee to keep the vehicle safely on the lane. Similarly, the powertrain system of a vehicle consists of the engine or battery, a controller and various catalysts or auxiliary components, but is legally required to produce low output emissions of various gases or energy consumption per distance as a whole. In both these examples, the validation of the system can also be viewed as the validation of its control component, when the other subsystems are considered fixed. To reduce the costs of real-world testing including system assembly and release delays, one can employ simulations of the composite system by combining models of the components, to perform virtual validation of the system (Wong et al., 2020).
However, it is difficult to assess how much such a composite virtual validation can be trusted, because the component models may be inaccurate w.r.t. the real-world components (simulation model misfits) or the simulation inputs may differ from the distribution of real-world inputs (data-shift). Incorporating these inaccuracies within the virtual validation analysis is particularly important for reliability analyses (Bect et al., 2012; Dubourg et al., 2013; Wang et al., 2016) in industrial applications with safety or legal relevance as those described above, where falsely judging a system to be reliable is much more expensive than false negatives. For this reason, we desire – if not an accurate estimate – then at least an upper bound on its true failure probability. Existing validation methods especially lack the composite (multi-component) aspect, where measurement data
arXiv:2210.12061v2 [cs.LG] 3 Jan 2024
[Figure 1 (diagram): the real system (top) takes field-usage inputs $x \sim p$ through components $S^1, S^2, S^3$ to the output $S(x)$, with failure probability $\int \mathbb{1}_{S(x)>\tau}\, dS(x)\, dp(x) \leq F_{\max}$; the simulation (bottom) takes simulation inputs $x \sim q$ through models $M^1, M^2, M^3$ to $M(x)$, with $\int \mathbb{1}_{M(x)>\tau}\, dM(x)\, dq(x)$. The two rows are connected by the data-shift (input discrepancy) between $p$ and $q$ and by per-component model misfits.]
Figure 1: Illustration of our validation task: A real, composite system of interest (top) is modeled with corresponding simulation models (bottom). Measurements of the real system are available only for the individual components, while end-to-end simulation data can be generated from the models. The task of the virtual validation method is to estimate the real system performance $S$ based on the simulations $M$, incorporating simulation model misfits w.r.t. the real-world components as well as any data-shift between the simulation input distribution and the field usage to be expected in the real system.
are available only for each individual component (Sec. 2).
To state the problem mathematically, the goal of this work is to estimate an upper bound $F_{\max}$ on the failure probability $\Pr[S(x) > \tau]$ of a system over real-world inputs $x \sim p(x)$:
$$F_{\max} \geq \Pr_{x,S}\left[S(x) > \tau\right] = \int \mathbb{1}_{S(x)>\tau}\, dS(x)\, dp(x), \tag{1}$$
S(x)
measures the system performance upon input
x
, and
τ
is a critical performance threshold indicating a
system failure. In the virtual validation setup, we assume
that no end-to-end measurements from the full composite
system
S
are available, and thus the upper bound
Fmax
is to
be estimated from the simulation
M
composed of models
M1, M2, . . .
, which are assumed to be given. This estimate
must take into account model misfits and data-shift in the
simulation input distribution (see Fig. 1). To assess the
model misfits, we assume validation measurements from
the individual components
S1, S2, . . .
to be given, e.g. from
component-wise development (for details see Sec. 3.1).
In this paper, we develop a method to estimate $F_{\max}$ from simulation runs by propagating bounds on distributional distances between simulation models and real-world components through the composite system. This propagation method incorporates model misfits and data-shifts in a pessimistic fashion by iteratively maximizing for the worst-case output distribution that is consistent with previously computed constraints on the input. Importantly, our method requires models and validation data from the individual components only, not from the full system.
Our main contributions can be summarized as follows:
1. We propose a novel, distribution-free bound on the distance between simulation-based and real-world distributions, without the need to have end-to-end measurements from the real world (Sec. 3.2).
2. We justify the method theoretically (Prop. 1) and show its practicality in reliability benchmarks (Sec. 4.2).
3. We demonstrate that – in contrast to alternative methods – the proposed method can account for data-shifts as well as model inaccuracies (Sec. 4).
2 RELATED WORK
Estimating the failure probability of a system is a core task in reliability engineering. In the reliability literature, one focus is on making this estimation more efficient compared to naive Monte Carlo sampling by reducing the variance on the estimator of the failure probability. Such classical methods include importance sampling (Rubinstein and Kroese, 2004), subset sampling (Au and Beck, 2001), line sampling (Pradlwarter et al., 2007), and first-order (Hohenbichler et al., 1987; Du and Hu, 2012; Zhang et al., 2015) or second-order (Kiureghian and Stefano, 1991; Lee et al., 2012) Taylor expansions. While being more efficient, they still require a large number of end-to-end function evaluations and cannot incorporate more detailed simulations.

Another line of research investigates how to reduce real-world function evaluations through virtualization of this performance estimation task (Xu and Saleh, 2021). The failure probability is estimated based on a surrogate model and hence cannot account for mismatches between the system and its surrogate. Dubourg et al. (2013) proposed a hybrid approach, where the proposal distribution of the importance sampling depends on the learned surrogate model. While this approach accounts, to some extent, for model mismatches, the proposal distribution might still be biased by a poor surrogate model. In summary, none of the approaches that are based on surrogate models provide a reliable bound on the true failure probability. Furthermore, all these approaches require end-to-end measurements from the real system, ignoring the composite structure of the system.
In practice, however, the system output $S(\cdot)$ in Eq. (1) refers to a complex system that often has a composite structure. That is, global inputs $x$ propagate through an arrangement, oftentimes termed a function network, of subsystems or components, see Fig. 1. Exploiting such a structure is expected to have a notable impact on the target task, be it experimental design (Marque-Pucheu et al., 2019), calibration and optimization (Astudillo and Frazier, 2019, 2021; Kusakawa et al., 2022; Xiao et al., 2022), uncertainty quantification (Sanson et al., 2019), or system validation as presented here.
In the context of Bayesian Optimization (BO), for example, Astudillo and Frazier (2021) construct a surrogate system of Gaussian Processes (GP) that mirrors the compositional structure of the system. Similarly, Sanson et al. (2019) discuss similarities of such structured surrogate models to Deep GPs (Damianou and Lawrence, 2013), and extend this framework to local evaluations of constituent components. However, learning (probabilistic) models of inaccuracies (Sanson et al., 2019; Riedmaier et al., 2021) introduces further modeling assumptions and cannot account for data-shifts. Instead, we aim at model-free worst-case statements. Marque-Pucheu et al. (2019) showed that a composite function can be efficiently modeled from local evaluations of constituent components in a sequential design approach. Friedman et al. (2021) extend this framework to cyclic structures of composite systems for adaptive experimental design. They derive bounds on the simulation error in composite systems, although assuming knowledge of Lipschitz constants as well as uniformly bounded component-wise errors.
Stitching together different datasets that cover the different parts of a larger mechanism without losing the causal relation was analyzed by Chau et al. (2021), who constructed corresponding models; however, the quality of the statements that can then be made about the real mechanism was not analyzed. Bounding the test error of models under input data-shift was analyzed empirically by Jiang et al. (2022) by investigating the disagreement between different models. Although they find a correlation between disagreement and test error, the authors do not provide a rigorous bound on the test error (Sec. 3.3) and also cannot incorporate an existing simulation model into the analysis.
3 METHOD
3.1 Setup: Composite System Validation
We consider a (real) system or system under test $S$ that is composed of subsystems $S^c$ ($c = 1, 2, \ldots, C$), over which we have only limited information. The validation task is to determine whether $S$ conforms to a given specification, such as whether the system output $y = S(x)$ stays below a given threshold $\tau$ for typical inputs $x$ – or whether the system's probability of failure, defined as violating the threshold, is sufficiently low, see Eq. (1). Our approach to this task is built on a model $M$ (typically a simulation, with no analytic form) of $S$ that is similarly composed of corresponding sub-models $M^c$. The main challenge in assessing the system's failure probability lies in determining how closely $M$ approximates $S$, in the case where the system data originate from disparate component measurements, which cannot be combined to consistent end-to-end data.
Components and signals. Mathematically, each component of $S$ – and similarly for $M$ – is a (potentially stochastic) map $S^c$, which upon input of a signal $x^c$ produces an output signal (sample) $y^c \sim S^c(\cdot|x^c)$ according to the conditional distribution $S^c$. The stochasticity allows for aleatoric system behavior or unmodeled influences. We consider the case where all signals are tuples $x^c = (x^c_1, \ldots, x^c_{d^c_{\mathrm{in}}})$, such as real vectors. The allowed "compositions" of the subsystems $S^c$ must be such that upon input of any signal (stimulus) $x$, an output sample $y \sim S(\cdot|x)$ can be produced by iterating through the components $S^c$ in order $c = 1, 2, \ldots, C$. More precisely, we assume that the input signal $x^c$ into $S^c$ is a concatenation of some entries $x|_{0\to c}$ of the overall input tuple $x$ and entries $y^{c'}|_{c'\to c}$ of some preceding outputs $y^{c'}$ (with $c' = 1, \ldots, c-1$); thus, $S^c$ is ready to be queried right after $S^{c-1}$. We assume the overall system output $y = y^C \in \mathbb{R}$ to be real-valued as multiple technical performance indicators (TPIs) could be considered separately or concatenated by weighted mean, etc. The simplest example of such a composite system is a linear chain $S = S^C \circ \ldots \circ S^2 \circ S^1$, where $x \equiv x^1$ is the input into $S^1$ and the output of each component is fed into the next, i.e. $x^{c+1} \equiv y^c$. Another example is shown in Fig. 1, where $x^3$ is concatenated from both outputs $y^1$ and $y^2$. We assume the identical compositional structure for the model $M$ with components $M^c$.
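The compositional structure above can be sketched in code. The following is our own toy example (the components and noise levels are made up, not from the paper): stochastic components $S^c$ are conditional samplers, and an end-to-end sample is produced by querying them in order, with $S^3$ receiving the concatenation of $y^1$ and $y^2$ as in Fig. 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stochastic components S^c: each returns one sample
# y^c ~ S^c(.|x^c) given its input signal x^c.
def S1(x):   # e.g. a sensor: noisy reading of the global input
    return x + rng.normal(0.0, 0.1)

def S2(x):   # e.g. a second sensor on the same input
    return 0.5 * x + rng.normal(0.0, 0.1)

def S3(x3):  # e.g. a controller: its input x^3 concatenates y^1 and y^2
    y1, y2 = x3
    return abs(y1 - y2) + rng.normal(0.0, 0.05)

def S(x):
    """One end-to-end sample y ~ S(.|x), iterating S^1, S^2, S^3 in order."""
    y1 = S1(x)
    y2 = S2(x)
    return S3((y1, y2))  # y = y^C is the real-valued TPI

# Naive Monte-Carlo failure probability over inputs x ~ p_x -- only
# possible when S can be sampled end-to-end, which the paper's setting
# precludes; shown here merely to fix the notation of Eq. (1).
tau = 1.5
xs = rng.normal(0.0, 1.0, size=10_000)
p_fail = np.mean([S(x) > tau for x in xs])
```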
Validation data. An essential characteristic of our setup is that neither $S$ nor the subsystem maps $S^c$ are known explicitly, and that "end-to-end" measurements $(x, y)$ from the full system $S$ are unavailable (see Sec. 1). Rather, we assume that validation data are available only for every subsystem $S^c$, i.e. pairs $(x^c_v, y^c_v)$ of inputs $x^c_v$ and corresponding output samples $y^c_v \sim S^c(\cdot|x^c_v)$ ($v = 1, \ldots, V^c$). Such validation data may have been obtained by measuring subsystem $S^c$ in isolation on some inputs $x^c_v$, without needing the full system $S$; note, the inputs $x^c_v$ do not necessarily follow the distribution from previous components. In the same spirit, the models $M^c$ may also have been trained from such "local" system data; we assume $M^c, M$ to be given from the start.
Probability distributions. We aim at probabilistic validation statements, namely that the system fails or violates its requirements only with low probability. For this, we assume that $S$ is repeatedly operated in a situation where its inputs come from a distribution $x \sim p_x$, in an i.i.d. fashion. For the example where $S$ is a car, the input $x$ might be a route that typical drivers take in a given city. Importantly, we do not assume much knowledge about $p_x$: merely a number of samples $x_v \sim p_x$ may be given, or alternatively its distance to the simulation input distribution $q_x = \frac{1}{n_M}\sum_{n=1}^{n_M} \delta_{x^M_n}$; here, $\delta_{x^M_n}$ are point measures on the input signals $x^M_n$ on which $M$ is being simulated. The input distribution $x \sim p_x$ will induce a (joint) distribution $p$ of all intermediate signals $x^c, y^c$ of the composite system $S$ and importantly the TPI output $y = y^C \sim S(x)$. Similarly, $M$ generates a joint model distribution $q$ by starting from $q_x$ and sampling through all $M^c$; via this simulation, we assume $q$ and all its marginals on intermediate signals $x^c, y^c$ to be available in sample-based form. The (true) failure probability is given by $p_{\mathrm{fail}} = \int \mathbb{1}_{S(x)>\tau}\, dS(x)\, dp(x)$, where in this paper we identify a system failure as the TPI exceeding the given threshold $\tau$. The model failure probability is $q_{\mathrm{fail}} = \int \mathbb{1}_{M(x)>\tau}\, dM(x)\, dq(x) \approx \frac{1}{n_M}\sum_n \mathbb{1}_{y^M_n > \tau}$, where $y^M_n$ denote sampled model TPI outputs for the inputs $x^M_n$. It is often useful in our setting to think of a distribution as a set of sample points, and vice versa.
Discrepancies. To track how far the simulation model $M$ diverges from the true system behavior $S$ in our probabilistic setting, we employ discrepancy measures $D$ between probability distributions. Such a measure $D$ maps two probability distributions $p, q$ over the same space to a real number, often having some interpretation of distance. We consider MMD distances $D = \mathrm{MMD}_k$ (Gretton et al., 2012), defined as the RKHS norm $\mathrm{MMD}_k(p, q) = \|p - q\|_k = \left[\int\!\!\int (p(x) - q(x))\, k(x, x')\, (p(x') - q(x'))\, dx\, dx'\right]^{1/2}$ w.r.t. a kernel $k$ on the underlying space (e.g. a squared-exponential or IMQ kernel (Gorham and Mackey, 2017)). Further possibilities include the cosine similarity $\mathrm{COS}_k(p, q) = \langle p, q\rangle_k / \|p\|_k \|q\|_k$ w.r.t. a kernel $k$, a Wasserstein distance $D = W_p$ w.r.t. a metric on the space, and the total variation norm $D = TV$ (Sriperumbudur et al., 2009, 2010); however, the latter cannot be estimated reliably from samples.
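For sample-based distributions, $\mathrm{MMD}_k$ can be computed directly from the two sample sets. A minimal numpy sketch of the biased (V-statistic) estimate with a squared-exponential kernel, following the standard formulas of Gretton et al. (2012) (the sample sets are our own illustration):

```python
import numpy as np

def sq_exp_kernel(X, Y, lengthscale=1.0):
    """Squared-exponential kernel matrix k(x, x') = exp(-|x - x'|^2 / (2 l^2))."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def mmd_biased(X, Y, lengthscale=1.0):
    """Biased (V-statistic) MMD estimate between samples X ~ p and Y ~ q:
    MMD^2 = mean k(X, X) - 2 mean k(X, Y) + mean k(Y, Y)."""
    mmd2 = (sq_exp_kernel(X, X, lengthscale).mean()
            - 2 * sq_exp_kernel(X, Y, lengthscale).mean()
            + sq_exp_kernel(Y, Y, lengthscale).mean())
    return np.sqrt(max(mmd2, 0.0))  # clip tiny negative rounding errors

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 1))   # samples from p
Y = rng.normal(0.5, 1.0, size=(500, 1))   # mean-shifted samples from q
print(mmd_biased(X, Y))   # clearly positive under the shift
print(mmd_biased(X, X))   # zero for identical sample sets
```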
Specifically, we assume a discrepancy measure $D^{c'\to c}$ to be given¹ for those pairs $0 \leq c' < c \leq C+1$ for which (a sub-tuple of) the output signal $y^{c'}$ is fed into the input $x^c$ (cf. the compositional structure above, and where we define $y^{c'=0} \equiv x$ and $x^{c=C+1} \equiv y := y^C$). This $D^{c'\to c}$ acts on probability distributions over the space of such sub-tuples like $y^{c'}|_{c'\to c}$ (or synonymously, $x^c|_{c'\to c}$), which is defined as the signal entries running from $y^{c'}$ to $x^c$. We denote the marginal of $p$ on these signal entries by $p|_{c'\to c}$, and similarly $q|_{c'\to c}$ for $q$. In the simplest case of a linear chain, $D^{c'\to c}$ with $c' = c-1$ acts on probability distributions such as $p|_{c'\to c}$ over the space of the (full) vectors $y^{c'} = x^c$. We omit superscripts $D^{c'\to c} \equiv D$ when clear from the context.

¹We will later address how to choose $D$ from a parameterized family $\mathcal{D}$, e.g. with different lengthscales.
Our method requires (upper bounds on) the discrepancies $D(p|_{0\to c}, q|_{0\to c})$ between marginals of the system and model input distributions $p_x, q_x$; specifically between the marginal distributions $p|_{0\to c}$ and $q|_{0\to c}$ over those sub-tuples $x|_{0\to c}$ which are input to subsequent components $c$. These discrepancies can either be estimated from samples $x_v, x^M_n$ of $p_x, q_x$, see the biased and unbiased estimates for MMD in Gretton et al. (2012)[App. A.2, A.3], which are accurate up to at most $\sim\sqrt{(1/n_{\min})\log(1/\delta)}$ at confidence level $1-\delta$ (where $n_{\min}$ denotes the size of the smaller of both sample sets); alternatively, these discrepancies may be directly given or upper bounded. These upper bounds are the quantities $B^{0\to c}$ below in Eq. (2). No further knowledge of the real-world input distribution $p_x$ is required.
3.2 Discrepancy Propagation Method
We now describe the key step in our method to quantify how closely the model's TPI output distribution, which we denote by $q_y \equiv q|_{C\to C+1}$, approximates the actual (but unknown) system output distribution $p_y \equiv p|_{C\to C+1}$. We do this by iteratively propagating worst-case discrepancy values through the (directed and acyclic) graph of components $S^c$/$M^c$, using only the available information, in particular the given validation data $(x^c_v, y^c_v)$ on a per-subsystem basis.
Discrepancy bound propagation. The basic idea is to go through the components $c = 1, 2, \ldots, C$ one-by-one. At each step $S^c$, we consider the "input discrepancies" $D(p|_{c'\to c}, q|_{c'\to c})$ (for $c' < c$), about which we already have information, and propagate this to gain information about the "output discrepancies" $D(p|_{c\to c''}, q|_{c\to c''})$ (for $c'' > c$). Here, we consider "information" in the form of inequalities $D(p|_{c'\to c}, q|_{c'\to c}) \leq B^{c'\to c}$, i.e. the information is the value of the (upper) bound $B^{c'\to c}$. Given bounds $B^{c'\to c}$ on the input signal of $S^c$, an upper bound on $D(p|_{c\to c''}, q|_{c\to c''})$ for each fixed $c'' > c$ can be found by maximizing the latter discrepancy over all (unknown) distributions $p$ that satisfy all the input discrepancy bounds:
$$B^{c\to c''} = \operatorname*{maximize}_p\; D(p|_{c\to c''}, q|_{c\to c''}) \tag{2}$$
$$\text{subject to } D(p|_{c'\to c}, q|_{c'\to c}) \leq B^{c'\to c} \quad \forall c' < c.$$
Note that the (sample-based) model distribution $q$ and its marginals in (2) are known and fixed after the simulation $M$ has been run on the input samples $x^M_n$ which constitute $q_x$ (see above). In contrast, as the actual $p$ is not known, we maximize over all possible system distributions $p$ in (2) according to the bounds from the previous components $c'$.
It remains to optimize over all possible sets of marginals $p|_{c\to c''}, p|_{c'\to c}$ (for all $c' < c$) occurring in (2). Ideally, one would consider all distributions $p(x^c)$ over input signals $x^c$, apply $S^c$ to each $x^c$ to obtain all possible joint distributions $p(x^c, y^c) = p(x^c)\, S^c(y^c|x^c)$ of in- and outputs, and compute from this all possible sets of marginals $p|_{c\to c''}, p|_{c'\to c}$.
Figure 2: Illustration of the (marginals of the) joint input-output distribution $p_\alpha$ (3), parameterized by weights $\alpha_v$. Corresponding in-/outputs $x_v, y_v$ have the same weight $\alpha_v$.
However, this is impossible as we do not know the action of $S^c$ on every possible input $x^c$. Rather, we merely know about the action of $S^c$ on the validation inputs $x^c_v$, namely that $y^c_v \sim S^c(x^c_v)$ is a corresponding output sample. We thus consider only the joint distributions $p(x^c, y^c) = p_\alpha$ that can be formed from the given validation data (Fig. 2)²:
$$p_\alpha = \sum_{v=1}^{V^c} \alpha_v\, \delta_{x^c_v}\, \delta_{y^c_v}, \tag{3}$$
such that the optimization variable becomes now a probability vector $\alpha \in \mathbb{R}^{V^c}$, i.e. with nonnegative entries $\alpha_v \geq 0$ summing to $\sum_v \alpha_v = 1$. By restricting to this (potentially skewed) set of joint distributions $p_\alpha$, the exact bound $B^{c\to c''}$ turns into an estimate; for further discussion see Prop. 1, which also proposes another possible parametrization $p_\alpha$. Using ansatz (3), the exact bound propagation (2) becomes:
$$B^{c\to c''} = \max_\alpha\; D(p_\alpha|_{c\to c''}, q|_{c\to c''}) \tag{4}$$
$$\text{s.t. } D(p_\alpha|_{c'\to c}, q|_{c'\to c}) \leq B^{c'\to c} \;\; \forall c' < c, \qquad \alpha \geq 0,\; \sum_v \alpha_v = 1.$$
Note that for sample-based distributions like $p_\alpha$ in (3) or $q$, the marginals in this optimization have a similar form, e.g. $p_\alpha|_{c\to c''} = \sum_v \alpha_v\, \delta_{y^c_v|_{c\to c''}}$ or $p_\alpha|_{c'\to c} = \sum_v \alpha_v\, \delta_{x^c_v|_{c'\to c}}$.
As the discrepancy measures $D$ in (4) are usually convex, this optimization problem is "almost" convex: All its constraints are convex, however, we aim to maximize a convex objective. For MMD measures we derive convex (semidefinite) relaxations of (4) by rewriting it with squared MMDs $D(p_\alpha, q)^2$, which are quadratic in $\alpha$ and thus linear in a new matrix variable $A = \alpha\alpha^T$; this last equality is then relaxed to the semidefinite inequality $A \succeq \alpha\alpha^T$ (App. A). While the relaxation is tight in most instances (App. E.4), the number of variables increases from $V^c$ to $\sim (V^c)^2/2$, restricting the method to $V^c \lesssim 10^3$ validation samples per component. In our implementation, we solve these SDPs using the CVXPY package (Diamond and Boyd, 2016).
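The lifting step can be made concrete. For $p_\alpha$ supported on validation points and $q$ on model samples, $\mathrm{MMD}_k(p_\alpha, q)^2 = \alpha^T K \alpha - 2 k^T \alpha + \mathrm{const}$ with $K$ the kernel Gram matrix; the quadratic term $\alpha^T K \alpha = \operatorname{tr}(K A)$ is linear in $A = \alpha\alpha^T$, which is what the relaxation $A \succeq \alpha\alpha^T$ exploits. A numpy sketch of this algebra (sample sets are our own made-up illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: V validation outputs y_v (support of p_alpha) and
# n_M model samples y^M_n (support of q), squared-exponential kernel.
V, nM = 50, 200
y_val = rng.normal(0.0, 1.0, size=V)
y_mod = rng.normal(0.2, 1.0, size=nM)
K = np.exp(-0.5 * (y_val[:, None] - y_val[None, :]) ** 2)              # k(y_v, y_w)
k_cross = np.exp(-0.5 * (y_val[:, None] - y_mod[None, :]) ** 2).mean(axis=1)
c_qq = np.exp(-0.5 * (y_mod[:, None] - y_mod[None, :]) ** 2).mean()

alpha = rng.dirichlet(np.ones(V))  # some probability vector

# MMD^2(p_alpha, q) = alpha^T K alpha - 2 k_cross^T alpha + c_qq:
# quadratic in alpha ...
mmd2_quadratic = alpha @ K @ alpha - 2 * k_cross @ alpha + c_qq

# ... but LINEAR in the lifted matrix variable A = alpha alpha^T, so the
# relaxed problem with A >= alpha alpha^T is a semidefinite program.
A = np.outer(alpha, alpha)
mmd2_lifted = np.trace(K @ A) - 2 * k_cross @ alpha + c_qq

assert np.isclose(mmd2_quadratic, mmd2_lifted)
```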
Bounding the failure probability. The final step of the preceding bound propagation yields an upper bound $B_y := B^{C\to C+1}$ on the discrepancy $D(p_y, q_y)$ between the (unknown) system TPI output distribution $p_y$ and its model counterpart $q_y$, which is given by samples $y^M_n$. We now apply an idea similar to (3) to obtain (a bound on) the system failure probability $p_{\mathrm{fail}} := \int_{y>\tau} p_y(y)\, dy$: Rather than maximizing $p_{\mathrm{fail}}$ over all distributions $p_y$ on $\mathbb{R}$ subject to the constraint $D(p_y, q_y) \leq B_y$, we make the optimization finite-dimensional by selecting grid-points $g_1 < g_2 < \ldots < g_V \in \mathbb{R}$ and parameterizing $p_y \equiv p_\alpha = \sum_{v=1}^V \alpha_v\, \delta_{g_v}$, such that $p_{\mathrm{fail}} = \sum_{v: g_v > \tau} \alpha_v$. In practice, we choose an equally-spaced grid in an interval $[g_{\min}, g_{\max}] \subset \mathbb{R}$ that covers the "interesting" or "plausible" TPI range, such as the support of $q_y$ as well as sufficient ranges below and above the threshold $\tau$. The size of the optimization problem corresponds to the number of grid-points $V$, so $V \sim 10^3$ is easily possible here. With this, our final upper bound $p_{\mathrm{fail}} \leq F_{\max}$ on the failure probability becomes the following convex program:
$$F_{\max} = \max_\alpha \sum_{v:\, g_v > \tau} \alpha_v \tag{5}$$
$$\text{s.t. } D(p_\alpha, q_y) \leq B_y, \qquad \alpha \geq 0, \quad \sum_v \alpha_v = 1.$$

²$\delta_{z_0}$ denotes a Dirac point mass at $z = z_0$.
One can obtain better (i.e. smaller) bounds $F_{\max}$ by restricting $p_\alpha$ further by plausible assumptions: (a) Monotonicity: When bounding a tail probability, i.e. $p_{\mathrm{fail}}$ is expected to be small, it may be reasonable to assume that $p_y$ is monotonically decreasing beyond some tail threshold $\tau'$. For an equally-spaced grid this adds constraints $\alpha_v \leq \alpha_{v-1}$ for all $v$ with $g_v \geq \tau'$ to (5); we always assume this with $\tau' := \tau$. (b) Lipschitz condition: To avoid that $p_\alpha$ becomes too "spiky", we pose a Lipschitz condition $|\alpha_{v+1} - \alpha_v| \leq \Lambda_{\max} |g_{v+1} - g_v|$ with a constant $\Lambda_{\max}$ estimated from the set of outputs $y^M_n$. See also App. B.
Note that our final bound $F_{\max}$ is a probability, whose interpretation is independent of the chosen discrepancy measures, kernels, or lengthscales. We can thus select these "parameters" by minimizing the finally obtained $F_{\max}$ over them. We do this using Bayesian optimization (Fröhlich et al., 2020). We summarize our full discrepancy propagation method to obtain a bound $F_{\max}$ on the system's failure probability in Algorithm 1, which we refer to as DPBound.
Upper bound property. We replaced the optimization over all possible system distributions $p$ in (2) by the distributions $p_\alpha$ from (3) due to the limited system validation data and to make the optimization tractable. This restricted and possibly skewed $p_\alpha$ can potentially cause $B^{c\to c''}$ and ultimately $F_{\max}$ from (4),(5) to not be true upper bounds on $D$ or even the system's (unknown) failure probability $p_{\mathrm{fail}}$, although the worst-case tendency of the maximizations alleviates the issue. We investigate this in the experiments (Sec. 4.2), and in the following proposition we state conditions under which (4),(5) are upper bounds:

Proposition 1. Suppose that for each component $c =$