On the Forward Invariance of Neural ODEs

Wei Xiao¹, Tsun-Hsuan Wang¹, Ramin Hasani¹, Mathias Lechner¹, Yutong Ban¹, Chuang Gan², Daniela Rus¹
Abstract
We propose a new method to ensure that neural ordinary differential equations (ODEs) satisfy output specifications by using invariance set propagation. Our approach uses a class of control barrier functions to transform output specifications into constraints on the parameters and inputs of the learning system. This setup allows us to achieve output specification guarantees simply by changing the constrained parameters/inputs both during training and inference. Moreover, we demonstrate that our invariance set propagation through data-controlled neural ODEs not only maintains generalization performance but also creates an additional degree of robustness by enabling causal manipulation of the system's parameters/inputs. We test our method on a series of representation learning tasks, including modeling physical dynamics and convexity portraits, as well as safe collision avoidance for autonomous vehicles.
1. Introduction
Neural ODEs (Chen et al., 2018) are continuous deep learning models that enable a range of useful properties, such as exploiting dynamical systems as an effective learning class (Haber & Ruthotto, 2017; Gu et al., 2021), efficient time series modeling (Rubanova et al., 2019; Lechner & Hasani, 2022), and tractable generative modeling (Grathwohl et al., 2018; Liebenwein et al., 2021).
Neural ODEs are typically trained via empirical risk minimization (Rumelhart et al., 1986; Pontryagin, 2018) endowed with proper regularization schemes (Massaroli et al., 2020), without much control over the behavior of the obtained network or over its ability to account for counterfactual inputs (Vorbach et al., 2021). For example, a well-trained neural ODE instance that has learned to chase a spiral dynamic (Fig. 1B) would not be able to avoid an object on its flow, even if it had seen this type of output specification/constraint during training. This shortcoming demands a fundamental fix to ensure the safe operation of these models, specifically in safety-critical applications such as robust and trustworthy policy learning, safe robot control, and system verification (Lechner et al., 2020; Kim et al., 2021; Hasani et al., 2022).

¹Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA, USA. ²MIT-IBM Watson AI Lab. Videos and code are available on the website: https://weixy21.github.io/invariance/. Correspondence to: Wei Xiao <weixy@mit.edu>.

Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

Figure 1. Invariance Propagation for neural ODEs. Output specifications can be guaranteed with invariance, including specification satisfaction between samplings, e.g., spiral curve regression with critical region avoidance. (Panels: A asks "What if there is an obstacle in the flow of a neural ODE?"; B shows a neural ODE with no invariance; C shows a neural ODE + invariance.)
In this paper, we set out to ensure neural ODEs satisfy
output specifications. To this end, we introduce the concept
of propagating invariance sets. An invariance set is a form
of specification consisting of physical laws, mathematical
expressions, safety constraints, and other prior knowledge of
the structure of the learning task. We can ensure that neural
ODEs are invariant to noise and affine transformations such
as rotating, translating, or scaling an input, as well as to
other uncertainties in training and inference.
To propagate invariance sets through neural ODEs, we can use Lyapunov-based methods with forward invariance properties, such as a class of control barrier functions (CBFs) (Ames et al., 2017), to formally guarantee that the output specifications are ensured. To account for the nonlinearity of the model, high-order CBFs (Xiao & Belta, 2019), a general form of CBFs, are required, since high-relative-degree constraints are introduced in such cases.

arXiv:2210.04763v2 [cs.LG] 31 May 2023
CBFs perform this by migrating output specifications to the learning system's parameters or its inputs, such that we can solve the constraints via forward calls to the learning system equipped with a quadratic program (QP). However, doing this requires a series of non-trivial novelties, which we address in this paper. 1. CBFs are model-based Lyapunov methods; thus, they can only be used with data and systems with known dynamics. Here, we extend their formalism to work with unknown dynamics via the properties of neural ODEs. 2. CBFs are typically applied to systems with affine transformations. For neural ODEs with nonlinear activations, the propagation of invariance sets becomes a challenge. We fix this by incorporating a virtual linear space within the neural ODE to find simple parameter/input constraints.
Going back to Fig. 1C, we observe that by applying our
forward invariance propagation method, we can correct the
model and force the system trajectories to stay away from
the red obstacles while maintaining the path of the ground
truth spiral curve.
In summary, we make the following new contributions:
- We incorporate formal guarantees into neural ODEs via invariance set propagation.
- We use the class of high-order CBFs (HOCBFs) (Xiao & Belta, 2022) to propagate the invariance set in neural ODEs while addressing their challenges, such as handling unknown dynamics and the nonlinearity of neural ODEs, as well as connecting the order concept in HOCBFs to that of network depth in neural ODEs.
- We demonstrate the effectiveness of our method on a variety of learning tasks and output specifications, including the modeling of physical systems and the safety of neural controllers for autonomous vehicles.
2. Preliminaries
In this section, we provide background on neural ODEs and
forward invariance in control theory.
2.1. Neural ODEs
A neural ordinary differential equation (ODE) is defined in the form (Chen et al., 2018):
$$\dot{x}(t) = f_\theta(x(t)), \qquad (1)$$
where $n \in \mathbb{N}$ is the state dimension, $x \in \mathbb{R}^n$ is the state, $\dot{x}$ denotes the time derivative of $x$, and $f_\theta: \mathbb{R}^n \to \mathbb{R}^n$ is a neural network model parameterized by $\theta$. The output of the neural ODE is the integral solution of (1). It can also take an external input, in which case the model is defined as:
$$\dot{x}(t) = f_\theta(x(t), I(t)), \qquad (2)$$
where $n_I \in \mathbb{N}$ is the external input dimension, $I(t) \in \mathbb{R}^{n_I}$, and $f_\theta: \mathbb{R}^n \times \mathbb{R}^{n_I} \to \mathbb{R}^n$ is a neural network model parameterized by $\theta$. For notational convenience, we write
$$f_\theta = f_{\theta_{K,K+1}} \circ \cdots \circ f_{\theta_{1,2}}, \qquad (3)$$
where $K$ is the number of layers, $f_{\theta_{k,k+1}}, k \in [1, K]$ is the forward process of the $k$'th layer, and we denote by $z_k = (f_{\theta_{k,k+1}} \circ \cdots \circ f_{\theta_{1,2}})(\cdot) \in \mathbb{R}^{n_k}$ the intermediate representation at the $k$'th layer, with $z_K = \dot{x}$ the output. $n_k \in \mathbb{N}$ denotes the number of neurons at layer $k$, and $n_K = n$.
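As a concrete sketch of (1), the following code integrates a small, randomly initialized neural ODE with a forward-Euler solver. This is an illustrative toy, not the paper's implementation: the two-layer tanh network, its width, and the step size are our own assumptions.

```python
import numpy as np

def make_neural_ode(n, hidden, rng):
    """A random two-layer network f_theta: R^n -> R^n (illustrative choice)."""
    W1 = rng.standard_normal((hidden, n)) * 0.5
    W2 = rng.standard_normal((n, hidden)) * 0.5
    def f_theta(x):
        # z_1 = tanh(W1 x) is the hidden representation; x_dot = W2 z_1
        return W2 @ np.tanh(W1 @ x)
    return f_theta

def odeint_euler(f, x0, t0, t1, dt=1e-2):
    """Approximate the integral solution of x_dot = f(x) by forward Euler."""
    x, t = np.array(x0, dtype=float), t0
    while t < t1:
        x = x + dt * f(x)
        t += dt
    return x

rng = np.random.default_rng(0)
f = make_neural_ode(n=2, hidden=8, rng=rng)
x1 = odeint_euler(f, x0=[1.0, -1.0], t0=0.0, t1=1.0)
print(x1.shape)  # (2,)
```

In practice, an adaptive solver (e.g., `scipy.integrate.solve_ivp`) would replace the fixed-step Euler loop.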
2.2. Forward Invariance in Control Theory
Consider an affine control system of the form
$$\dot{x} = f(x) + g(x)u, \qquad (4)$$
where $x \in \mathbb{R}^n$, $f: \mathbb{R}^n \to \mathbb{R}^n$ and $g: \mathbb{R}^n \to \mathbb{R}^{n \times q}$ are locally Lipschitz, and $u \in U \subset \mathbb{R}^q$, where $U$ denotes a control constraint set.

Definition 2.1. (Set invariance): A set $C \subset \mathbb{R}^n$ is forward invariant for system (4) if its solutions for some $u \in U$ starting at any $x(t_0) \in C$ satisfy $x(t) \in C, \forall t \ge t_0$.

Definition 2.2. (Relative degree): The relative degree of a differentiable function $b: \mathbb{R}^n \to \mathbb{R}$ (or constraint $b(x) \ge 0$) with respect to system (4) is the number of times $b(x)$ needs to be differentiated along dynamics (4) until any component of $u$ explicitly shows up in the corresponding derivative.
Definition 2.3. (Class $\mathcal{K}$ function): A Lipschitz continuous function $\alpha: [0, a) \to [0, \infty), a > 0$, belongs to class $\mathcal{K}$ if it is strictly increasing and $\alpha(0) = 0$.

Definition 2.4. (High-Order Barrier Function (HOBF)): A function $b: \mathbb{R}^n \to \mathbb{R}$ of relative degree $m$ is a HOBF with a sequence of functions $\psi_i: \mathbb{R}^n \to \mathbb{R}$ such that $\psi_m(x) \ge 0$, where
$$\psi_i(x) := \dot{\psi}_{i-1}(x) + \alpha_i(\psi_{i-1}(x)), \quad i \in \{1, \dots, m\}, \qquad (5)$$
$\psi_0(x) := b(x)$, and $\alpha_i(\cdot)$ is an $(m-i)$'th order differentiable class $\mathcal{K}$ function. We define a sequence of sets
$$C_i := \{x \in \mathbb{R}^n : \psi_{i-1}(x) \ge 0\}, \quad i \in \{1, \dots, m\}. \qquad (6)$$
Definition 2.5. (High-Order Control Barrier Function (HOCBF) (Xiao & Belta, 2022)): Let $\psi_i$ and $C_i$ be defined by (5) and (6), respectively, for $i \in \{1, \dots, m\}$. A function $b: \mathbb{R}^n \to \mathbb{R}$ is a HOCBF of relative degree $m$ if there exist $(m-i)$'th order differentiable class $\mathcal{K}$ functions $\alpha_i, i \in \{1, \dots, m\}$, such that
$$\sup_{u \in U} \left[ L_f^m b(x) + [L_g L_f^{m-1} b(x)]u + O(b(x)) + \alpha_m(\psi_{m-1}(x)) \right] \ge 0, \qquad (7)$$
for all $x \in C_1 \cap \dots \cap C_m$. $L_f$ and $L_g$ denote Lie derivatives w.r.t. $x$ along $f$ and $g$, respectively, and $O(b(x)) = \sum_{i=1}^{m-1} L_f^i (\alpha_{m-i} \circ \psi_{m-i-1})(x)$. The satisfaction of (7) is equivalent to the satisfaction of $\psi_m(x) \ge 0$ as defined in (5).

The HOCBF is a general form of the CBF (Ames et al., 2017) (a HOCBF with $m = 1$ degenerates to a CBF), and it can be applied to systems of arbitrary relative degree, such as the invariance propagation to nonlinear layers of a neural ODE in this work.
Theorem 2.6 (Xiao & Belta, 2022): Given a HOCBF $b(x)$ from Def. 2.5 with the sets $C_1, \dots, C_m$ defined by (6), if $x(t_0) \in C_1 \cap \dots \cap C_m$, then any Lipschitz continuous controller $u(t)$ that satisfies the constraint in (7) for all $t \ge t_0$ renders $C_1 \cap \dots \cap C_m$ forward invariant for system (4).

In this work, we map forward invariance in control theory to forward invariance in neural ODEs, where we tackle arbitrary dynamics defined by a neural ODE $f_\theta$ (which can be nonlinear, as opposed to an affine control system of the form (4)).
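To make Defs. 2.2-2.5 concrete, consider a textbook example (ours, not the paper's): a double integrator $\dot{p} = v$, $\dot{v} = u$ with specification $b(x) = p - 1 \ge 0$. Since $u$ first appears after differentiating $b$ twice, the relative degree is $m = 2$; choosing the linear class-$\mathcal{K}$ functions $\alpha_i(s) = s$, the sequence (5) and the HOCBF constraint (7) can be computed in closed form:

```python
import numpy as np

# Double integrator: x = (p, v), p_dot = v, v_dot = u (illustrative system).
# Specification b(x) = p - 1 >= 0 has relative degree m = 2.
def psi_sequence(p, v, u):
    """psi_0, psi_1, psi_2 from (5) with alpha_i(s) = s."""
    psi0 = p - 1.0                 # b(x)
    psi1 = v + psi0                # psi0_dot + alpha_1(psi0); psi0_dot = p_dot = v
    psi2 = (u + v) + psi1          # psi1_dot + alpha_2(psi1); psi1_dot = u + v
    return psi0, psi1, psi2

def min_safe_u(p, v):
    """Smallest u satisfying the HOCBF constraint (7), i.e., psi2 >= 0."""
    return -(2.0 * v + p - 1.0)

p, v = 2.0, -0.5
u = min_safe_u(p, v)
print(psi_sequence(p, v, u)[2])  # 0.0: the constraint is exactly active
```

With $m = 1$ this sequence collapses to the classical CBF condition $\dot{b}(x) + \alpha_1(b(x)) \ge 0$.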
3. Invariance Propagation
In this section, we present the theoretical framework of Invariance Propagation (IP) to guarantee forward invariance (in short, invariance) of a neural ODE. We first provide formalisms for the proposed method. Then, we describe invariance propagation to (i) linear layers, (ii) nonlinear layers, and (iii) external inputs.

Output Specification. A continuously differentiable function $h: \mathbb{R}^n \to \mathbb{R}$ constructs an output specification $h(x) \ge 0$ for a neural ODE. Typical output specifications include system safety (e.g., collision avoidance in autonomous driving), physical laws (e.g., energy conservation), mathematical formulae (e.g., the Cauchy-Schwarz inequality), etc.
Definition 3.1. (Forward Invariance in Neural ODEs): The (forward) invariance of a neural ODE (1) or (2) with $f_\theta$ is defined w.r.t. its output specification $h(x) \ge 0$ such that if $h(x(t_0)) \ge 0$, then $h(x(t)) \ge 0, \forall t \ge t_0$, where $x(t) = x(t_0) + \int_{t_0}^{t} f_\theta(x(\tau))\, d\tau$. Intuitively, this property guarantees that the satisfaction of the output specification is carried forward in the neural ODE across time.
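Def. 3.1 can be checked empirically along a simulated flow. A minimal sketch with our own toy dynamics (not the paper's): for $\dot{x} = -x$, the specification $h(x) = x \ge 0$ is forward invariant, since $x(t) = x(t_0)e^{-(t-t_0)}$ never changes sign, whereas constant downward drift violates it.

```python
import numpy as np

def flow_is_invariant(f, h, x0, T=5.0, dt=1e-3):
    """Empirically check Def. 3.1: h(x(t)) >= 0 along the Euler flow of f."""
    x = float(x0)
    for _ in range(int(T / dt)):
        x += dt * f(x)       # one Euler step of x_dot = f(x)
        if h(x) < 0:
            return False     # specification violated at some t > t0
    return True

f = lambda x: -x             # toy dynamics: exponentially decaying flow
h = lambda x: x              # output specification h(x) = x >= 0
print(flow_is_invariant(f, h, x0=1.0))           # True
print(flow_is_invariant(lambda x: -1.0, h, 1.0)) # False: drifts below zero
```

Such a check is only a sanity test on one trajectory; the point of IP is to obtain the guarantee analytically, for all time and between sampling instants.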
Definition 3.2. (Invariance Propagation (IP)): Given an output specification $h(x) \ge 0$ and a neural ODE $f_\theta$, invariance propagation describes a procedure to find a constraint $\Psi(d) \ge 0$, where $\Psi: \mathbb{R}^{n_d} \to \mathbb{R}$ and $d$ ($n_d \in \mathbb{N}$ is its dimension) is either (i) a subset of the parameters $\theta$, (ii) the external input $I$, or (iii) other auxiliary variables of the neural ODE, such that if $\Psi(d) \ge 0$, the forward invariance defined in Def. 3.1 is satisfied. Intuitively, IP casts invariance w.r.t. the output specification into constraints on non-output quantities of the ODE.
3.1. Invariance Propagation to Linear Layers
We start with a simple case where the invariance is propagated to linear layers of the neural ODE, which normally occurs at the output layer without nonlinear activation functions.

Neural ODE Reformulation. Without loss of generality, we follow (3) and assume a linear output layer $f_{\theta_{K-1,K}}$:
$$\dot{x} = \sum_{i=1}^{n_{K-1}} \theta^i_{K-1,K}\, z^i_{K-1} = \theta^P_{K-1,K} z^P_{K-1} + \theta^N_{K-1,K} z^N_{K-1}, \qquad (8)$$
where $z_{K-1} = (f_{\theta_{K-2,K-1}} \circ \cdots \circ f_{\theta_{1,2}})(x)$, $\theta^i_{K-1,K}$ is the $i$'th column of $\theta_{K-1,K} \in \mathbb{R}^{n \times n_{K-1}}$, $z^i_{K-1}$ is the $i$'th entry of $z_{K-1} \in \mathbb{R}^{n_{K-1}}$, and $P$ and $N$ describe the sets of columns that are updatable parameters (to which the invariance is propagated) and constants, respectively. We drop the bias term for cleaner notation.
Propagation to Linear Layers. Our goal is to propagate the invariance to a subset of parameters. We treat $\theta^P_{K-1,K}$ as a variable while taking the other parameters $\theta^N_{K-1,K}$ as constants. Given an arbitrary output specification $h(x) \ge 0$, we can define a $\psi_1$ function in the form:
$$\psi_1(x, \theta^P_{K-1,K}) := \frac{dh(x)}{dx} f_\theta(x) + \alpha_1(h(x)), \qquad (9)$$
where $\alpha_1(\cdot)$ is a class $\mathcal{K}$ function. Note that $\theta^P_{K-1,K}$ is implicitly defined in $f_\theta$. Combining (9) with (8), the following theorem shows the invariance of the neural ODE (1):
Theorem 3.3. Given a neural ODE as in (8) and an output specification $h(x) \ge 0$, if there exist a class $\mathcal{K}$ function $\alpha_1$ and $\theta^P_{K-1,K}$ such that, with $\psi_1$ as in (9),
$$\Psi(\theta^P_{K-1,K} \mid x) = \psi_1(x, \theta^P_{K-1,K}) \ge 0, \qquad (10)$$
for all $x$ such that $h(x) \ge 0$, where $\Psi(\theta^P_{K-1,K} \mid x) = \frac{dh}{dx} \theta^P_{K-1,K} z^P_{K-1} + \frac{dh}{dx} \theta^N_{K-1,K} z^N_{K-1} + \alpha_1(h(x))$, then the neural ODE is forward invariant.

The proof and the existence of $\alpha_1$ are shown in Appendix A.1.
Brief Summary. Thm. 3.3 provides a condition on the parameter $\theta^P_{K-1,K}$ that implies the invariance of the neural ODE. In other words, by modifying the parameter $\theta^P_{K-1,K}$ such that (10) is always satisfied, we can guarantee the invariance. The algorithm is shown in the next section. Moreover, since we only need to take the derivative of $h(x)$ once, as shown in (10), this is analogous to a first-order HOCBF (i.e., $m = 1$ in Def. 2.5).

Figure 2. Invariance propagation to an arbitrary layer of the neural ODE with an auxiliary virtual linear space. (The figure shows the output invariance (specification), hidden invariance, and auxiliary input invariance propagated through layers $K, K-1, \dots, k$ via the chain rule into the virtual system.)
IP to the Proper Parameters. Using Thm. 3.3, we can propagate the invariance to neural ODE parameters in linear layers. Note, however, that we need to choose the parameters such that all outputs of the neural ODE can be changed by modifying the target parameters. Otherwise, the output specification may fail to be guaranteed.
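A useful observation about Thm. 3.3 is that $\Psi(\theta^P_{K-1,K} \mid x)$ is affine in the propagated columns, which is what later makes the enforcement a QP. The sketch below (our own toy dimensions and random values, not the paper's) verifies this numerically by comparing $\Psi$ against its explicit affine form in the row-major flattening of $\theta^P$:

```python
import numpy as np

def Psi(theta_P, dhdx, z_P, theta_N, z_N, h_x, alpha=lambda s: s):
    """Constraint (10): Psi = dh/dx (theta_P z_P + theta_N z_N) + alpha(h(x))."""
    return dhdx @ (theta_P @ z_P + theta_N @ z_N) + alpha(h_x)

rng = np.random.default_rng(1)
n, p, q = 3, 2, 2                        # output dim, |P| columns, |N| columns
dhdx = rng.standard_normal(n)            # gradient of the specification h at x
z_P, z_N = rng.standard_normal(p), rng.standard_normal(q)
theta_N = rng.standard_normal((n, q))    # frozen (constant) columns
theta_P = rng.standard_normal((n, p))    # propagated (updatable) columns
h_x = 0.3                                # value of h(x)

# Psi is affine in vec(theta_P): Psi = a . vec(theta_P) + c, with
# a[i*p + j] = dhdx[i] * z_P[j] under row-major flattening.
a = np.kron(dhdx, z_P)
c = dhdx @ theta_N @ z_N + h_x
assert np.isclose(Psi(theta_P, dhdx, z_P, theta_N, z_N, h_x),
                  a @ theta_P.ravel() + c)
```

Because the constraint is a half-space in $\mathrm{vec}(\theta^P)$, the minimum-deviation enforcement in Sec. 4.1 stays convex.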
3.2. Invariance Propagation to Nonlinear Layers
In this section, we consider how we may efficiently propagate the invariance to the weight parameters of an arbitrary layer of the neural ODE (including output layers with nonlinear activation functions). Theoretically, we can propagate the invariance to arbitrary layers using the existing HOCBF theory. However, the resulting invariance enforcement would involve nonlinear programs (i.e., the HOCBF constraint (7) would be nonlinear in $u$), which are computationally hard and inefficient to solve. Moreover, such a formulation does not allow us to incorporate IP into the training loop to address the conservativeness of the invariance, as discussed next. Our method works for both (1) and (2), so we only consider (1) for simplicity.
An Auxiliary Linear System. Given a neural ODE (1), we want to propagate the invariance to the partial parameter (similar to (8)) at the $k$'th layer, $\theta^P_{k,k+1} \in \mathbb{R}^{n^P_{k+1} \times n_k}$, with $k \in \{1, \dots, K\}$ and $n^P_{k+1} \le n_{k+1}$. Then, we flatten the matrix parameter $\theta^P_{k,k+1}$ row-wise into a vector $\theta^P_k \in \mathbb{R}^{d^P_k}$, where $d^P_k = n^P_{k+1} n_k$ is the dimension of the vector. Instead of directly propagating the invariance to the parameter $\theta^P_k$, which would result in nonlinear constraints, we propagate the invariance to an auxiliary linear system:
$$\dot{\theta}^P_k = A^P_k \theta^P_k + B^P_k u^P_k, \qquad (11)$$
where $A^P_k \in \mathbb{R}^{d^P_k \times d^P_k}$ and $B^P_k \in \mathbb{R}^{d^P_k \times d^P_k}$ are chosen such that the auxiliary system is controllable, and $u^P_k \in \mathbb{R}^{d^P_k}$ is the auxiliary control input. The exact choice of $A^P_k$ and $B^P_k$ may slightly impact the performance, which is further discussed in Appendix B. This specific formulation allows performing IP on $u^P_k$ linearly (which will become clearer later on), as opposed to directly on $\theta^P_k$, which is susceptible to nonlinearity. An overview is illustrated in Fig. 2.
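A simple controllable choice for (11) is $A^P_k = 0$, $B^P_k = I$, so the flattened parameters become a single integrator driven directly by the auxiliary control. This choice is our assumption for illustration (the paper discusses the choice of $A^P_k, B^P_k$ in its Appendix B):

```python
import numpy as np

def step_aux_params(theta_flat, u, dt, A=None, B=None):
    """One Euler step of the auxiliary system (11): theta_dot = A theta + B u.
    Defaults A = 0, B = I give a controllable single integrator (assumed choice)."""
    d = theta_flat.size
    A = np.zeros((d, d)) if A is None else A
    B = np.eye(d) if B is None else B
    return theta_flat + dt * (A @ theta_flat + B @ u)

theta = np.array([[0.5, -1.0],
                  [2.0,  0.0]])                 # theta^P_{k,k+1}, here 2x2
theta_flat = theta.ravel()                      # row-wise flatten, d^P_k = 4
u = np.array([1.0, 0.0, 0.0, -1.0])             # auxiliary control u^P_k
theta_flat = step_aux_params(theta_flat, u, dt=0.1)
print(theta_flat.reshape(2, 2))                 # parameters nudged by 0.1 * u
```

The network weights are thus updated continuously by integrating (11) alongside the neural ODE state, rather than being set discretely.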
Propagation to the Auxiliary System. We first propagate the invariance to the parameter $\theta^P_k$ by defining a function $\psi_1$ similar to (9), which is illustrated by the blue boxes in Fig. 2. Then, we further propagate the invariance to $u^P_k$ in system (11) by defining another function $\psi_2$ (the red box in Fig. 2):
$$\psi_1(x, \theta^P_k) := \frac{dh(x)}{dx} f_\theta(x) + \alpha_1(h(x)),$$
$$\psi_2(x, u^P_k) := \frac{\partial \psi_1}{\partial x} f_\theta(x) + \frac{\partial \psi_1}{\partial \theta^P_k} \dot{\theta}^P_k + \alpha_2(\psi_1(x, \theta^P_k)), \qquad (12)$$
where $\alpha_1(\cdot), \alpha_2(\cdot)$ are class $\mathcal{K}$ functions and $\psi_1, \psi_2$ are defined in a similar spirit to (5). Remark that, differently from (9), here $\psi_1$ is nonlinear in $\theta^P_k$, yet the newly introduced $\psi_2$ is linear in the auxiliary variable $u^P_k$, thanks to $\dot{\theta}^P_k$ being a linear system w.r.t. $u^P_k$, as shown in (11). Combining (12) with (11), the following theorem shows the invariance:
Theorem 3.4. Given a neural ODE defined by (1) and an output specification $h(x) \ge 0$, if there exist class $\mathcal{K}$ functions $\alpha_1, \alpha_2$ and $u^P_k$ such that, with $\psi_1, \psi_2$ as in (12),
$$\Psi(u^P_k \mid x) = \psi_2(x, u^P_k) \ge 0, \qquad (13)$$
for all $x$ that satisfy $h(x) \ge 0$ and $\psi_1(x) \ge 0$, where
$$\Psi(u^P_k \mid x) = \frac{d^2 h(x)}{dx^2} f_\theta^2(x) + \frac{dh(x)}{dx} \frac{\partial f_\theta(x)}{\partial \theta^P_k} (A^P_k \theta^P_k + B^P_k u^P_k) + \left( \frac{dh(x)}{dx} \frac{\partial f_\theta(x)}{\partial x} + \frac{d\alpha_1(h(x))}{dx} \right) f_\theta(x) + \alpha_2(\psi_1(x)),$$
then the neural ODE is forward invariant.

The proof and the existence of $\alpha_1, \alpha_2$ are shown in Appendix A.2. Intuitively, the invariance is first propagated via $\psi_1$ to $\theta^P_k$, then via $\psi_2$ to $u^P_k$, rendering a linear constraint in (13). We will further show how this enforces invariance in Sec. 4.2.
Brief Summary. Note that (13) is linear in $u^P_k$ with the assistance of system (11). Instead of directly changing the parameters of the neural ODE for the invariance as in Sec. 3.1, we find an auxiliary control $u^P_k$ that satisfies constraint (13) to dynamically change the parameters. Also, since we take the derivative of $h(x)$ twice, as in (13), this is analogous to a second-order HOCBF (i.e., $m = 2$ in Def. 2.5).

IP to the Proper Parameters. While Thm. 3.4 allows us to propagate the invariance to arbitrary neural ODE parameters, the choice of the parameters may affect the performance (e.g., the model's accuracy). The specific parameter choice depends on the model structure and the task's output specification. In most cases, we may wish to choose parameters of the same layer to propagate the invariance to. However, it is also possible to choose parameters of different layers, as long as we define auxiliary dynamics for all the parameters as in (11). The proposed method still works in such cases. We may need to choose the parameters such that the output of the neural ODE can all be changed, as in the linear case.
3.3. Invariance Propagation to External Input
We consider a neural ODE in the form of (2) with an external input $I$.

Approach 1: As in Sec. 3.1, we may directly reformulate (2) in the following affine form:
$$\dot{x} = f_\theta(x) + g_\theta(x) I, \qquad (14)$$
where $f_\theta$ is defined as in (1) and $g_\theta: \mathbb{R}^n \to \mathbb{R}^{n \times n_I}$ is another neural network parameterized by $\theta$. Then, we can use a technique similar to that of Sec. 3.1 to propagate the invariance to the external input $I$, as both are in affine form.

Approach 2: If we wish to keep the neural ODE with external input in the form of (2), then we may define auxiliary linear dynamics as in Sec. 3.2 and augment (2) as follows:
$$\dot{x} = f_\theta(x, y), \quad \dot{y} = Ay + BI, \qquad (15)$$
where $y \in \mathbb{R}^{n_I}$ is the auxiliary variable, and $A \in \mathbb{R}^{n_I \times n_I}, B \in \mathbb{R}^{n_I \times n_I}$ are defined such that the linear system is controllable (similar to (11)). Then, we can use a technique similar to that of Sec. 3.2 to propagate the invariance to the external input $I$ via the auxiliary variable $y$. In fact, the above neural ODE becomes a stacked neural ODE, which will be studied further (as discussed in Appendix E).
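Approach 2 can be sketched by stacking $x$ and $y$ into one state and integrating the pair jointly; the tanh dynamics below is a toy stand-in for $f_\theta$, and the stable pair $A = -I$, $B = I$ is our assumed controllable choice:

```python
import numpy as np

def augmented_step(x, y, I, dt, A, B, f_theta):
    """One Euler step of (15): x_dot = f_theta(x, y), y_dot = A y + B I."""
    x_next = x + dt * f_theta(x, y)
    y_next = y + dt * (A @ y + B @ I)
    return x_next, y_next

nI = 2
A, B = -np.eye(nI), np.eye(nI)             # controllable, stable auxiliary pair
f_theta = lambda x, y: np.tanh(x + y)      # toy data-controlled dynamics
x, y = np.zeros(nI), np.zeros(nI)
I_ext = np.array([1.0, -1.0])              # constant external input
for _ in range(100):                       # integrate to t = 1 with dt = 0.01
    x, y = augmented_step(x, y, I_ext, dt=0.01, A=A, B=B, f_theta=f_theta)
print(x.shape, y.shape)
```

With this choice, $y$ is a low-pass filtered version of $I$ (it converges toward $I$), so constraining $I$ through $y$ inherits the linear structure that Sec. 3.2 exploits.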
4. Enforcing Invariance in Neural ODEs
Here, we show how we proceed from the theoretical framework in Sec. 3 to efficient algorithms for IP on neural ODEs.
4.1. Algorithms for Linear Layers
Enforcing invariance. Enforcing the invariance of a neural ODE is equivalent to satisfying the condition in Thm. 3.3. Moreover, by the proof in Appendix A.1, we can always find a class $\mathcal{K}$ function $\alpha_1(\cdot)$ such that there exists a $\theta^P_{K-1,K}$ that makes (10) satisfied if $h(x(t_0)) \ge 0$. If $h(x(t_0)) < 0$, then the output of the neural ODE will be driven to satisfy $h(x) \ge 0$ when the constraint (10) in Thm. 3.3 is satisfied, due to its Lyapunov property (Ames et al., 2012). How the invariance is enforced may vary across applications, and we do not restrict ourselves to exact methods. We provide a minimum-deviation quadratic program (QP) approach.

Let $\theta^{P*}_{K-1,K} \in \mathbb{R}^{n \times n^P_{K-1}}$ denote the value of $\theta^P_{K-1,K}$ during or after training. Then, we can formulate the following optimization:
$$\theta^P_{K-1,K} = \arg\min_{\theta^P_{K-1,K}} \|\theta^P_{K-1,K} - \theta^{P*}_{K-1,K}\|^2, \quad \text{s.t. } (10), \qquad (16)$$
where $\|\cdot\|$ denotes the Euclidean norm. The above optimization becomes a QP with all other variables fixed except $\theta^P_{K-1,K}$. This solution method has been shown to work in (Ames et al., 2017; Glotfelter et al., 2017; Xiao & Belta, 2022). At each discretization step, we solve the above QP to obtain $\theta^P_{K-1,K}$, which we then use during the inference of the neural ODE. In this way, we can enforce the invariance, i.e., guarantee that $h(x(t)) \ge 0, \forall t \ge t_0$. The process is summarized in Algorithm 1.
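When there is a single affine constraint $a \cdot \theta + c \ge 0$ (one specification of the form (10)), the minimum-deviation QP (16) reduces to a Euclidean projection onto a half-space, which has a well-known closed form. A sketch with illustrative values (the generic projection formula, not the paper's Algorithm 1):

```python
import numpy as np

def min_deviation_qp(theta_star, a, c):
    """Solve min ||theta - theta_star||^2  s.t.  a . theta + c >= 0
    by projecting theta_star onto the feasible half-space."""
    slack = a @ theta_star + c
    if slack >= 0:                             # trained value already satisfies (10)
        return theta_star.copy()
    return theta_star - a * slack / (a @ a)    # half-space projection

theta_star = np.array([1.0, 2.0])   # flattened trained parameters theta^{P*}
a, c = np.array([1.0, 0.0]), -3.0   # constraint: theta[0] - 3 >= 0
theta = min_deviation_qp(theta_star, a, c)
print(theta)  # [3. 2.]  (minimal change that activates the constraint)
```

With several specifications, this single-projection shortcut no longer applies, and a QP solver handles the stacked half-space constraints instead.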
Complexity of Enforcing Invariance. The computational complexity of the QP (16) is $O(q^3)$, where $q = n^P_{K-1} n$. When there is a set $S$ of output specifications, we simply add the corresponding constraint (10) for each specification to (16); the number of constraints will not significantly increase the complexity. It is also possible to obtain the closed-form solution of the QP (Ames et al., 2017) when there are only a few output specifications.
4.2. Algorithms for Nonlinear Layers

Stability of the Auxiliary Systems. In this case, we need to make sure that the parameter $\theta^P_k$ of the neural ODE is stabilized, as it is dynamically controlled by (11). To enforce this, we use control Lyapunov functions (CLFs) (Ames et al., 2012). Specifically, for each $\theta^P_{kj}, j \in \{1, \dots, d^P_k\}$, where $\theta^P_{kj}$ is a component of $\theta^P_k$, we define a CLF $V(\theta^P_{kj}) = (\theta^P_{kj} - \theta^{P*}_{kj})^2$, where $\theta^{P*}_{kj}$ is the value of $\theta^P_{kj}$ during or after training. Then, any $u^P_k$ that satisfies
$$\Phi(u^P_k \mid \theta^P_{kj}) \le 0, \quad j \in \{1, \dots, d^P_k\}, \qquad (17)$$
where $\Phi(u^P_k \mid \theta^P_{kj}) = \frac{dV(\theta^P_{kj})}{d\theta^P_{kj}} (A^P_{kj} \theta^P_k + B^P_{kj} u^P_k) + \epsilon_j V(\theta^P_{kj})$, $A^P_{kj} \in \mathbb{R}^{1 \times d^P_k}, B^P_{kj} \in \mathbb{R}^{1 \times d^P_k}$ are the $j$'th rows of $A^P_k, B^P_k$ in (11), respectively, and $\epsilon_j > 0$, will render the auxiliary systems (11) stable. The proof is in Appendix A.3.
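For a scalar parameter with the assumed choice $A^P_{kj} = 0$, $B^P_{kj} = 1$, condition (17) reads $2(\theta - \theta^*)u + \epsilon(\theta - \theta^*)^2 \le 0$: any admissible $u$ pulls $\theta$ back toward its trained value $\theta^*$. A toy numeric sketch (our own, not the paper's algorithm):

```python
import numpy as np

def clf_constraint(u, theta, theta_star, eps=1.0):
    """Phi from (17) for scalar theta with A = 0, B = 1 (assumed choice):
    Phi = dV/dtheta * u + eps * V, where V = (theta - theta_star)^2."""
    e = theta - theta_star
    return 2.0 * e * u + eps * e ** 2

theta, theta_star, dt = 2.0, 1.0, 0.01
for _ in range(500):
    u = -(theta - theta_star)              # satisfies Phi <= 0 for eps = 1
    assert clf_constraint(u, theta, theta_star) <= 1e-12
    theta += dt * u                        # theta_dot = u drives theta to theta_star
print(abs(theta - theta_star) < 0.05)      # True: parameter has converged
```

In the full algorithm, this CLF condition is not enforced exactly but relaxed with slack variables, so that the hard invariance constraint (13) always takes priority over returning to the trained weights.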
Enforcing invariance. Enforcing the invariance of a neural ODE is equivalent to satisfying the condition in Thm. 3.4. By the proof in Appendix A.2, we can always find class $\mathcal{K}$ functions $\alpha_1, \alpha_2$ such that there exists a $u^P_k$ that makes (13) satisfied if $h(x(t_0)) > 0$. Again, we provide a minimum-deviation quadratic program (QP) approach:
$$(u^P_k, \delta_{1:d^P_k}) = \arg\min_{u^P_k,\, \delta_{1:d^P_k}} \|u^P_k\|^2 + \sum_{j=1}^{d^P_k} w_j \delta_j^2, \quad \text{s.t. } (13) \text{ and } \Phi(u^P_k \mid \theta^P_{kj}) \le \delta_j, \ j \in \{1, \dots, d^P_k\}, \qquad (18)$$