A kernel-based quantum random forest for improved classification
Maiyuren Srikumar,1 Charles D. Hill,1,2 and Lloyd C.L. Hollenberg1
1School of Physics, University of Melbourne, VIC, Parkville, 3010, Australia.
2School of Mathematics and Statistics, University of Melbourne, VIC, Parkville, 3010, Australia.
(Dated: February 21, 2023)
The emergence of Quantum Machine Learning (QML) to enhance traditional classical learning
methods has seen various limitations to its realisation. There is therefore an imperative to develop
quantum models with unique model hypotheses to attain expressional and computational advantage.
In this work we extend the linear quantum support vector machine (QSVM) with kernel function
computed through quantum kernel estimation (QKE), to form a decision tree classifier constructed
from a decision directed acyclic graph of QSVM nodes - the ensemble of which we term the quantum
random forest (QRF). To limit overfitting, we further extend the model to employ a low-rank
Nyström approximation to the kernel matrix. We provide generalisation error bounds on the model and theoretical guarantees to limit errors due to finite sampling in the Nyström-QKE strategy. In doing so, we show that we can achieve lower sampling complexity when compared to QKE. We numerically illustrate the effect of varying model hyperparameters and finally demonstrate that the QRF is able to obtain superior performance over QSVMs, while also requiring fewer kernel estimations.
I. INTRODUCTION
The field of quantum machine learning (QML) is currently in its infancy and there exist not only questions of advantage over its classical counterpart, but also questions of its realisability in real-world applications. Many
early QML approaches [1,2] utilised the HHL algorithm
[3] to obtain speed-ups on the linear algebraic compu-
tations behind many learning methods. However, such
methods presume efficient preparation of quantum states
or quantum access to data, limiting practicality [4].
Currently, the two leading contenders for near-term
supervised QML methods are quantum neural networks
(QNNs) [5,6] and quantum kernel methods [7,8]. QNNs
constructed from parameterised quantum circuits embed
data into a quantum feature space and train parameters
to minimise a loss function of some observable of the
circuit. However, such variational algorithms face prob-
lems of barren plateaus in optimisation [9–11] which hin-
der the feasibility of training the model. In contrast,
quantum kernel methods suggest a non-parametric ap-
proach whereby only the inner product of quantum em-
bedded data points are estimated on a quantum device
– a process known as quantum kernel estimation (QKE)
[7,8]. We refer to the model utilising QKE for con-
structing a support vector machine (SVM), as a quan-
tum-SVM (QSVM) – the most commonly explored quan-
tum kernel method. Despite rigorous proofs of quan-
tum advantage using a QSVM [12], they are not without
their limitations for arbitrary real-world datasets. Vanishing kernel elements [13] require a large number of quantum circuit samples (suggesting a deep connection with barren plateaus [14]), and there are indications
that quantum kernels fail to generalise with the addi-
tion of qubits without the careful encoding of the correct
problem-dependent inductive bias [14]. Furthermore, the
kernel approach grows quadratically in complexity with
the number of training samples, as opposed to the lin-
ear scaling of QNNs. This becomes untenable for large
datasets. These limitations indicate that further work is
required to practically deploy QML techniques.
The close similarity of QNNs and quantum kernel
methods [15] highlights the importance of developing al-
ternate QML models that present distinct model hy-
potheses. In this work, we propose a quantum random
forest (QRF) that fundamentally cannot be expressed
through a kernel machine due to its discontinuous de-
cision tree structure. Such a model is inspired by the
classical random forest (CRF) [16].
The CRF has proven to be an extremely successful model in machine learning applications. However, there exist many barriers to the realisation of an analogous QRF. The main challenge is the determination of an algo-
rithm that is not limited by either the intrinsically quan-
tum complication of the no-cloning theorem [17], or the
hard problem of efficiently loading data in superposition
[4]. As opposed to previous works [18,19] that employ
Grover’s algorithm with input states of specific form, the
QRF proposed in this work is ultimately an ensemble
of weak classifiers referred to as quantum decision trees
(QDTs) with an approximate QSVM forming the split
function – shown in Figure 1. Crucially, each QDT has
random components that distinguish it from the en-
semble. The aim is to form a collection of uncorrelated
weak QDT classifiers so that each classifier provides dis-
tinct additional information. The possible benefits of an
ensemble of quantum classifiers have been expressed [20],
with recent suggestions of boosting techniques for ensem-
bles of QSVMs [21]. However, the distinctive tree struc-
ture has not yet been explored. Not only does this allow for discontinuous model hypotheses, but we will also see that the
structure naturally accommodates multi-class supervised
problems without further computation. In comparison,
QSVMs would require one-against-all (OAA) and one-against-one (OAO) [22] strategies, which would respectively require constructing $|\mathcal{C}|$ and $|\mathcal{C}|(|\mathcal{C}|-1)/2$ separate QSVMs, where $\mathcal{C}$ is the set of classes in the problem.
The expressive advantage of the QRF is predicated on
the distinguishability of data when mapped to a higher
dimensional feature space. This proposition is supported
by Cover’s Theorem [23], which states that data is more
likely to be separable when projected to a sparse higher
dimensional space. Cover’s result is also a principal mo-
tivator for developing both quantum and classical kernel
methods. However, it is often the case that generalisation suffers as a model becomes more expressive – a phenomenon commonly known through the bias-variance trade-off in statistical learning. To counter the expressivity of the (already quite expressive) quantum model, we make a randomised low-rank Nyström approximation [24] of the kernel matrix supplied to the SVM, limiting the ability of the model to overfit. We refer to this strategy as Nyström-QKE (NQKE). We show that the inclusion
of this approximation further allows for a reduced circuit
sampling complexity.
The paper is structured as follows: in Section II we address the exact form of the QRF model, before providing theoretical results in Section III. This is followed by numerical results and discussion in Section IV. Background on many of the ML techniques is presented in Appendix A.
II. THE QUANTUM RANDOM FOREST
MODEL
The Quantum Random Forest (QRF) is composed of a set of $T$ distinct quantum decision trees (QDTs), $\{\mathcal{Q}_t\}_{t=1}^{T}$, which form the weak independent classifiers of the QRF ensemble. This is illustrated in Figure 1. As with all supervised ML methods, the QDTs are given the set of $N$ training instances, $x_i^{\mathrm{train}} \in \mathbb{R}^D$, and their associated $n_c$ class labels, $y_i^{\mathrm{train}} \in \mathcal{C} = \{0, 1, \ldots, n_c - 1\}$, to learn from the annotated data set, $\mathcal{S} = \{(x_i^{\mathrm{train}}, y_i^{\mathrm{train}})\}_{i=1}^{N}$, sampled from some underlying distribution $\mathcal{D}$. Each tree is trained using $N_p \leq N$ – which we will refer to as the partition size – instances sampled from the original dataset $\mathcal{S}$. This acts as both a regularisation strategy, as well as a way to reduce the time complexity of the model. Such a technique is known as bagging [25] or bootstrap aggregation and it has the ability to greatly minimise the probability of overfitting.
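As an illustrative sketch of this sampling step (plain NumPy written for this discussion; the function name and the choice of sampling without replacement are our assumptions, since the text above only specifies a partition of size $N_p \leq N$), each QDT would receive its own random partition:

```python
import numpy as np

def sample_partition(X, y, partition_size, rng):
    """Draw a random subset of `partition_size` training instances for one QDT.

    Sampling here is without replacement, matching the idea of a partition of
    size N_p <= N; classical bagging often samples with replacement instead,
    which would be a one-line change (replace=True).
    """
    idx = rng.choice(len(X), size=partition_size, replace=False)
    return X[idx], y[idx]

rng = np.random.default_rng(seed=7)
X = rng.normal(size=(100, 4))        # N = 100 instances with D = 4 features
y = rng.integers(0, 3, size=100)     # n_c = 3 classes
X_t, y_t = sample_partition(X, y, partition_size=40, rng=rng)  # N_p = 40
```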
Once each classifier is trained, the model is tested with a separate set, $\mathcal{S}^{\mathrm{test}}$, of size $N'$, from which we compare the predicted class distribution with the real one. To obtain the overall prediction of the QRF model, the predictions of each QDT are averaged across the ensemble. Specifically, each QDT returns a probability distribution, $\mathcal{Q}_t(\vec{x}; c) = \Pr(c \,|\, \vec{x})$ for $c \in \mathcal{C}$, with the overall prediction given as the class with the greatest mean across the ensemble. Hence, given an input, $\vec{x}$, we have a prediction of the form,
$$\tilde{y}(\vec{x}) = \arg\max_{c \in \mathcal{C}} \left\{ \frac{1}{T} \sum_{t=1}^{T} \mathcal{Q}_t(\vec{x}; c) \right\}. \qquad (1)$$
The accuracy of the model can be obtained by comparing predictions with the real labels, $\frac{1}{N'} \sum_{(\vec{x}, y) \in \mathcal{S}^{\mathrm{test}}} \delta_{y, \tilde{y}(\vec{x})}$, where $\delta_{i,j}$ is the Kronecker delta. The distinction between training and testing data is generally critical to ensure that the model is not overfitting to the training dataset and hence generalises to the prediction of previously unseen data points. We now elaborate on the structure of the QDT model.
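As a concrete reading of Eq. (1) and the accuracy expression above, the following is a minimal NumPy sketch (illustrative only; the function names are ours):

```python
import numpy as np

def qrf_predict(tree_probs):
    """tree_probs: array of shape (T, n_c) holding Q_t(x; c) for a single input x.
    Returns the class with the greatest mean probability across the ensemble, Eq. (1)."""
    return int(np.argmax(tree_probs.mean(axis=0)))

def accuracy(y_true, y_pred):
    """Fraction of test points whose prediction matches the label (Kronecker-delta average)."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

# Toy usage: T = 3 trees, n_c = 2 classes, one test input.
probs = np.array([[0.7, 0.3],
                  [0.4, 0.6],
                  [0.8, 0.2]])
print(qrf_predict(probs))  # -> 0, since the class-0 mean (0.633) is largest
```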
A. A Quantum Decision Tree
The Quantum Decision Tree (QDT) proposed in this
work is an independent classification algorithm that has a
directed graph structure identical to that of a binary de-
cision tree – illustrated in Figure 1 and elaborated further
in Appendix A 1. Each vertex, also referred to as a node,
can either be a split or leaf node, distinguished by colour
in Figure 1. A leaf node determines the classification
output of the QDT, while a split node separates inserted
data points into two partitions that subsequently con-
tinue down the tree. The split effectiveness is measured
by the reduction in entropy, referred to as the informa-
tion gain (IG). Given a labelled data set, S, that is split
into partitions SLand SRby the split function, we have,
IG(S;SL,SR) = H(S)X
i∈{L,R}
|Si|
|S| H(Si),(2)
where His the entropy over the class distribution defined
as H(S) = Pc∈C Pr(c) log2Pr(c) where class c∈ C oc-
curs with probability Pr(c) in the set S. Clearly the IG
will increase as the splitting more distinctly separates in-
stances of different classes. Using information gain is ad-
vantageous especially when more than two classes are in-
volved and an accuracy-based effectiveness measure fails.
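A direct implementation of Eq. (2) from class counts (an illustrative Python sketch following the entropy definition above):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(S) = -sum_c Pr(c) log2 Pr(c) over the class labels in S."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(labels, labels_left, labels_right):
    """IG(S; S_L, S_R) = H(S) - sum_i |S_i|/|S| H(S_i), for i in {L, R}."""
    n = len(labels)
    return entropy(labels) - sum(
        len(part) / n * entropy(part) for part in (labels_left, labels_right)
    )

# A split that perfectly separates two balanced classes gains the full 1 bit.
print(information_gain([0, 0, 1, 1], [0, 0], [1, 1]))  # -> 1.0
```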
Given a partitioned training set at the root node, the QDT carries out this splitting process down to the leaves of the tree, where a prediction of the class distribution is made from the proportions of remaining data points in each class. Mathematically, with a subset of data points $\mathcal{S}^{(l)}$ supplied at training to leaf $\ell^{(l)}$ indexed by $l$, we set its prediction as the probability distribution,
$$\ell^{(l)}(\mathcal{S}^{(l)}; c) = \frac{1}{|\mathcal{S}^{(l)}|} \sum_{(\vec{x}, y) \in \mathcal{S}^{(l)}} [y = c], \qquad (3)$$
where $[p]$ is the Iverson bracket that returns 1 when the proposition $p$ is true and 0 otherwise. When training the model, a node is defined as a leaf if any of the following conditions are met: (i) the training instances supplied to the node are of a single class – in which case further splitting is unnecessary, (ii) when the number of data points left
FIG. 1. The quantum random forest (QRF) constitutes an ensemble of classifiers, referred to as quantum decision trees (QDTs). This tree structure is in turn a directed graph of nodes defined by the split function $\mathcal{N}^{\Phi,L} : \mathbb{R}^D \rightarrow \{-1, +1\}$, where $\Phi, L$ label the type of function. The split function is an SVM that admits a Nyström-approximated quantum kernel, $\widetilde{K}$. The kernel is defined through a chosen embedding $\Phi$, where $K_{ij} = |\langle\Phi(x_j)|\Phi(x_i)\rangle|^2$ and $|\Phi(x_i)\rangle = U(x_i)|0\rangle$ for parameterised unitary $U$. The randomly selected $L$ points for the Nyström approximation determine the hyperplane generated in the feature space, $\mathcal{H}_\Phi$.
for splitting is smaller than a user-defined value, $m_s$, i.e. $|\mathcal{S}^{(l)}| \leq m_s$, or (iii) the node is at the maximum depth of the tree, $d$, defined by the user. Once trained, prediction occurs by following an instance down the tree until it reaches a leaf node. The unique path from root to leaf is referred to as the evaluation path. At this end point, the prediction of the QDT is the probability distribution held within the leaf, defined at training in equation (3). The ensemble of predictions made by distinct QDTs are subsequently combined with Eq. (1).
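Putting the leaf conditions (i)–(iii) and the leaf distribution of Eq. (3) together, the tree-growing logic can be summarised by the recursive sketch below (illustrative Python; `train_split_function` is a placeholder for the NQKE split node described in the next section, and the helper names are ours):

```python
import numpy as np

def grow_tree(X, y, depth, max_depth, min_samples, n_classes, train_split_function):
    """Recursively grow a QDT; returns a nested dict representing the tree."""
    # Leaf conditions (i)-(iii): single class, too few points, or maximum depth d.
    if len(np.unique(y)) == 1 or len(y) <= min_samples or depth == max_depth:
        counts = np.bincount(y, minlength=n_classes)
        return {"leaf": counts / counts.sum()}          # Eq. (3): class proportions

    split = train_split_function(X, y)                  # returns f: x -> {-1, +1}
    mask = np.array([split(x) == +1 for x in X])
    if mask.all() or (~mask).all():                     # degenerate split -> treat as leaf
        counts = np.bincount(y, minlength=n_classes)
        return {"leaf": counts / counts.sum()}

    return {
        "split": split,
        "left":  grow_tree(X[~mask], y[~mask], depth + 1, max_depth,
                           min_samples, n_classes, train_split_function),
        "right": grow_tree(X[mask],  y[mask],  depth + 1, max_depth,
                           min_samples, n_classes, train_split_function),
    }

def tree_predict(node, x):
    """Follow the evaluation path from root to a leaf and return its distribution."""
    while "leaf" not in node:
        node = node["right"] if node["split"](x) == +1 else node["left"]
    return node["leaf"]
```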
The crux of the QDT model lies at the performance of split nodes, with the $l$th node defined through a split function, $\mathcal{N}^{(l)}_\theta : \mathcal{X} \rightarrow \{-1, +1\}$, where $\theta$ are (hyper-)parameters of the split function that are either manually selected prior to, or algorithmically tuned during, the training phase of the model – the specifics are elaborated in the next section. Instances are divided into partitions $\mathcal{S}_- = \{(\vec{x}, y) \in \mathcal{S} \,|\, \mathcal{N}^{(l)}_\theta(\vec{x}) = -1\}$ and $\mathcal{S}_+ = \{(\vec{x}, y) \in \mathcal{S} \,|\, \mathcal{N}^{(l)}_\theta(\vec{x}) = +1\}$. At this point it is natural to refer to the information gain of a split function as $\mathrm{IG}(\mathcal{S}\,|\,\mathcal{N}^{(l)}_\theta) := \mathrm{IG}(\mathcal{S}; \mathcal{S}_-, \mathcal{S}_+)$, with the aim being to maximise $\mathrm{IG}(\mathcal{S}\,|\,\mathcal{N}_\theta)$ at each node. In essence, we desire
a tree that is able to separate points so that instances of
different classes terminate at different leaf nodes. The intention is that a test instance that follows the unique
evaluation path to a particular node is most likely to be
similar to the instances that followed the same path dur-
ing training. Hence, effective split functions amplify the
separation of classes down the tree. However, it remains
important that the split function generalises, as repeated
splitting over a deep tree can easily lead to overfitting.
It is for this reason decision trees often employ simple
split functions to avoid having a highly expressive model.
Namely, the commonly used CART algorithm [26] has a
threshold function for a single attribute at each node.
In the next section we specify the split function to
have the form of a support vector machine (SVM) that
employs a quantum kernel. The core motivation is to
generate a separating hyperplane in a higher dimensional
quantum feature space that can more readily distinguish
instances of different classes. However, this is a complex
method that will inevitably lead to overfitting. Hence, we
take an approximate approach to reduce its effectiveness
without stifling possible quantum advantage.
B. Nyström Quantum Kernel Estimation
The proposed split function generates a separating hyperplane through a kernelised SVM. The kernel matrix (or Gram matrix) is partially computed using Quantum Kernel Estimation (QKE) [27] and completed using the Nyström approximation method. We therefore refer to this combined process as Nyström Quantum Kernel Estimation (NQKE), with its parameters determining the core performance of a splitting node. Background on SVMs and a deeper discussion on the Nyström method are supplied in Appendix A 2.
Given a dataset $\mathcal{S}^{(i)} = \{(x_j, y_j)\}_{j=1}^{N^{(i)}}$ supplied to the $i$th split node, the Nyström method randomly selects a set of $L$ landmark points $\mathcal{S}^{(i)}_L \subseteq \mathcal{S}^{(i)}$. Without loss of generality we may assume $\mathcal{S}^{(i)}_L = \{(x_j, y_j)\}_{j=1}^{L}$. The inner products between elements in $\mathcal{S}^{(i)}$ and $\mathcal{S}^{(i)}_L$ are computed using a quantum kernel defined through the inner product of parameterised density matrices,
$$k_\Phi(x', x'') = \mathrm{Tr}[\rho_\Phi(x')\rho_\Phi(x'')], \qquad (4)$$
where we have the spectral decomposition $\rho_\Phi(x) = \sum_j \lambda_j |\Phi_j(x)\rangle\langle\Phi_j(x)|$ with parameterised pure states $\{|\Phi_j(x)\rangle\}_j$ that determine the quantum feature map, $x \mapsto \rho(x)$. In practice, the $\rho_\Phi$ are pure states, reducing the trace to $k(x', x'') = |\langle\Phi(x'')|\Phi(x')\rangle|^2 = |\langle 0|U^\dagger(x'')U(x')|0\rangle|^2$, allowing one to obtain an estimate of the kernel by sampling the probability of measuring $|0\rangle$ on the state $U^\dagger(x'')U(x')|0\rangle$.
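For intuition, the sketch below estimates a single kernel entry by computing the ideal overlap for a toy single-qubit angle-encoding feature map and then drawing finite Bernoulli samples of the $|0\rangle$ outcome. The embedding used here is an assumption made purely for the example and is not one of the embeddings studied in this work:

```python
import numpy as np

def embed(x):
    """Toy single-qubit feature map |Phi(x)> = U(x)|0> with U(x) = RY(x)."""
    return np.array([np.cos(x / 2), np.sin(x / 2)], dtype=complex)

def kernel_exact(x1, x2):
    """k(x', x'') = |<Phi(x'')|Phi(x')>|^2 computed directly from the statevectors."""
    return float(np.abs(np.vdot(embed(x2), embed(x1))) ** 2)

def kernel_estimate(x1, x2, shots, rng):
    """QKE: sample the probability of measuring |0> on U^dag(x'') U(x') |0>."""
    p0 = kernel_exact(x1, x2)               # ideal |0> probability
    return rng.binomial(shots, p0) / shots  # finite-shot Bernoulli estimate

rng = np.random.default_rng(0)
print(kernel_exact(0.3, 1.1), kernel_estimate(0.3, 1.1, shots=1000, rng=rng))
```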
The Nyström method can be used for the matrix completion of positive semi-definite matrices. Using the subset of columns measured through the inner product of elements between $\mathcal{S}$ and $\mathcal{S}_L$ for an arbitrary node (dropping the node label for brevity), which we define as the $N \times L$ matrix $G := [W, B]^\top$ where $G_{ij} = k(x_i, x_j)$, $i \leq N$, $j \leq L$, with $W \in \mathbb{R}^{L \times L}$ and $B \in \mathbb{R}^{L \times (N-L)}$, we complete the $N \times N$ kernel matrix by making the approximation $K \approx GW^{-1}G^\top$. Expanding, we have,
$$K \approx \widehat{K} := \begin{bmatrix} W & B \\ B^\top & B^\top W^{-1} B \end{bmatrix}, \qquad (5)$$
where, in general, $W^{-1}$ is the Moore-Penrose generalised inverse of the matrix $W$. This becomes important in cases where $W$ is singular. Intuitively, the Nyström method utilises correlations between sampled columns of the kernel matrix to form a low-rank approximation of the full matrix. This means that the approximation suffers when the underlying matrix $K$ is close to full rank. However, the manifold hypothesis [25] suggests that choosing $L \ll N$ is not entirely futile, as data generally does not explore all degrees of freedom and often lies on a sub-manifold, where a low-rank approximation is reasonable.
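A minimal sketch of the completion step leading to Eq. (5) (illustrative NumPy, assuming the $N \times L$ block has already been estimated; the pseudo-inverse handles a singular $W$):

```python
import numpy as np

def nystrom_complete(G, L):
    """Complete the N x N kernel matrix from its first L (landmark) columns.

    G : (N, L) array with G[i, j] = k(x_i, x_j) for j < L, landmarks listed first.
    Returns K_hat = G @ pinv(W) @ G.T, whose top-left L x L block is W itself.
    """
    W = G[:L, :]                        # landmark-landmark block
    return G @ np.linalg.pinv(W) @ G.T

# Toy check on an exactly low-rank Gram matrix, where the completion is exact.
rng = np.random.default_rng(1)
F = rng.normal(size=(8, 2))             # 8 points in a 2-dimensional feature space
K = F @ F.T                             # rank-2 positive semi-definite kernel matrix
K_hat = nystrom_complete(K[:, :3], L=3)
print(np.allclose(K, K_hat))            # -> True, since rank(K) <= L
```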
The Nyström approximation does not change the positive semi-definiteness of the kernel matrix, $\widehat{K} \succeq 0$, and the SVM optimisation problem remains convex. Furthermore, the reproducing kernel Hilbert space (RKHS) induced by the adjusted kernel, $\hat{k}$, is a nonparametric class of functions of the form, $\mathcal{H}_\Phi = \left\{ f_\Phi \mid f_\Phi(\cdot) = \sum_{i=1}^{N} \alpha'_i \hat{k}_\Phi(\cdot, x_i),\ \alpha'_i \in \mathbb{R} \right\}$, which holds as a result of the celebrated representer theorem [28,29]. The specific set of $\{\alpha'_i\}_{i=1}^{N}$ for a given dataset is obtained through solving the SVM quadratic program. This constructs a split function that has the form,
$$\mathcal{N}^{\Phi;\alpha}(x) = \mathrm{sign}\left[\sum_{i=1}^{N} \alpha_i \tilde{y}_i \hat{k}_\Phi(x, x_i)\right], \qquad (6)$$
where $\alpha'_i = \alpha_i \tilde{y}_i$, $\alpha_i \geq 0$ and $\tilde{y}_i = F(y_i)$, with the function $F : \mathcal{C} \rightarrow \{-1, 1\}$ mapping the original class labels, $y_i \in \mathcal{C}$ – which in general can be from a set of many classes, i.e. $|\mathcal{C}| > 2$ – to a binary class problem. In Section IV, we provide numerical comparisons between two possible approaches to defining the function $F$, namely, (i) one-against-all (OAA), and (ii) even-split (ES) strategies – see Appendix B 3 for more details.
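To make the split node concrete, the sketch below (illustrative code built on scikit-learn's SVC with a precomputed kernel; it assumes the labels have already been mapped to $\pm 1$ by $F$, and the Nyström extension of the kernel row to unseen points is our reading of the construction) fits the SVM on the completed Gram matrix and evaluates $\mathrm{sign}[f(x)]$ of Eq. (6) for new points using only $L$ kernel estimates per prediction:

```python
import numpy as np
from sklearn.svm import SVC

def train_split_function(K_hat, X, y_binary, landmark_idx, kernel_fn):
    """Fit an SVM split node on the Nystrom-completed Gram matrix K_hat (N x N).

    y_binary    : training labels already mapped to {-1, +1} by F.
    landmark_idx: indices of the L landmark points within X.
    kernel_fn   : callable k(x, x') evaluated against the landmarks at prediction time.
    """
    svm = SVC(kernel="precomputed").fit(K_hat, y_binary)
    W = K_hat[np.ix_(landmark_idx, landmark_idx)]   # L x L landmark block
    G = K_hat[:, landmark_idx]                      # N x L block
    W_pinv = np.linalg.pinv(W)

    def split(x):
        # Nystrom-extended kernel row between the new point and all training
        # points: k_land @ W^+ @ G^T, needing only L kernel estimates.
        k_land = np.array([kernel_fn(x, X[j]) for j in landmark_idx])
        k_row = k_land @ W_pinv @ G.T
        return +1 if svm.decision_function(k_row.reshape(1, -1))[0] >= 0 else -1

    return split
```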
The construction of a split function of this form adopts a randomised node optimisation (RNO) [30] approach to ensure that the correlation between QDTs is kept to a minimum. The randomness is injected through the selection of landmark data points, as the optimised hyperplane will differ depending on the subset chosen. Furthermore, there exist possibilities to vary the hyperparameters $\Phi$ and $L$, both down and across trees. This is depicted in Figure 1 with split functions notated with a separate tuple $(\Phi_i, L_i)$ for depths $i = 1, \ldots, D-1$ of the tree. The specific kernel defined by an embedding $\Phi$ implies a unique Hilbert space of functions for which $k_\Phi$ is the reproducing kernel [31]. The two types of embeddings numerically explored are defined in Appendix B 2. The QRF approach of employing distinct kernels at each depth of the tree gives a more expressive enhancement to the QSVM method. A simple ensemble of independent QSVMs would require greater computational effort and would also lose the tree structure present in the QRF.
III. THEORETICAL RESULTS
QML literature has seen a push towards understanding
the performance of models through the well established
perspective of statistical learning theory. There has re-
cently been a great deal of work determining the gener-
alisation error for various quantum models [32–35] that illuminate optimal strategies for constructing quantum learning algorithms. This is often achieved by bounding the generalisation error (or risk) of a model producing some hypothesis $h \in H$, $R(h) = \Pr_{(x,y)\sim\mathcal{D}}[h(x) \neq y]$, where $\mathcal{D}$ indicates the underlying distribution of the dataset. However, $\mathcal{D}$ is unknown and one simply has a set of samples $\mathcal{S} \sim \mathcal{D}^N$. We therefore define the empirical error of a particular hypothesis $h$ as the error over known points, $\widehat{R}(h) = \frac{1}{N}\sum_{(x_i, y_i) \in \mathcal{S}} \mathbb{1}_{h(x_i) \neq y_i}$. This is often referred to as the training error associated with a model and is equivalent to $(1 - \text{accuracy})$. Achieving a small $\widehat{R}(h)$, however, does not guarantee a model capable of predicting new instances. Hence, providing an upper bound on the generalisation error $R(h)$ is imperative in understanding the performance of a learning model.
The QDT proposed in this work can be shown to gener-
alise well in the case where the margins at split nodes are
large. This is guaranteed by the following Lemma which
builds on the generalisation error of perceptron decision
trees [36], with proof supplied in Appendix B 5.
Lemma 1. (Generalisation error of Quantum Decision Trees) Let $H_J$ be the hypothesis set of all QDTs composed of $J$ split nodes. Suppose we have a QDT, $h \in H_J$, with kernel matrix $K^{(i)}$ and labels $\{y^{(i)}_j\}_{j=1}^{N^{(i)}}$ given at the $i$th node. Given that $m$ instances are correctly classified, with high probability we can bound the generalisation error,
$$R(h) \leq \widetilde{O}\left(\frac{1}{m}\left[J \log\left(4mJ^{2}\right) + \log(4m)^{2} \sum_{i=1}^{J} \sum_{j,k=1}^{N^{(i)}} y^{(i)}_{j} y^{(i)}_{k} \left(K^{(i)+}\right)_{jk}\right]\right), \qquad (7)$$
where $K^{(i)}_{jk} = |\langle\Phi(x^{(i)}_j)|\Phi(x^{(i)}_k)\rangle|^2$ and $\{y^{(i)}_j\}_{j=1}^{N^{(i)}}$ are respectively the Gram matrix and binary labels given to the $i$th node in the tree, and $K^+$ denotes the Moore-Penrose generalised inverse of matrix $K$.
The term $s_K = \sum_{j,k=1}^{N^{(i)}} y^{(i)}_j y^{(i)}_k (K^{(i)+})_{jk}$ is referred to as the model complexity and appears in the generalisation error bounds for kernel methods [33]. The term is both inversely related to the kernel target alignment [37] measure, which indicates how well a specific kernel function aligns with a given dataset, and an upper bound on the reciprocal of the geometric margin of the SVM. Therefore, Lemma 1 highlights the fact that, for a given training error, a QDT with split nodes exhibiting larger margins (and smaller model complexities) is more likely to generalise. Crucially, the generalisation bound is
not dependent on the dimension of the feature space but
rather the margins produced. This is in fact a common
strategy for bounding kernel methods [38], as there are
cases in which kernels represent inner products of vectors
in infinite dimensional spaces [22]. In the case of quan-
tum kernels, bounds based on the Vapnik-Chervonenkis
(VC) dimension would grow exponentially with the num-
ber of qubits. The result of Lemma 1 therefore suggests
that large margins are analogous to working in a lower
VC class. However, the disadvantage is that the bound
is a function of the dataset and there are no a priori
guarantees that the problem exhibits large margins.
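The model-complexity term of Lemma 1 is straightforward to evaluate from a node's Gram matrix and its $\pm 1$ labels; an illustrative sketch:

```python
import numpy as np

def model_complexity(K, y_binary):
    """s_K = y^T K^+ y for a node's Gram matrix K and its +/-1 labels y.

    Smaller values correspond to larger margins and, by Lemma 1, to a split
    node that is more likely to generalise.
    """
    y = np.asarray(y_binary, dtype=float)
    return float(y @ np.linalg.pinv(K) @ y)

K = np.array([[1.0, 0.1],
              [0.1, 1.0]])
print(model_complexity(K, [+1, -1]))  # -> ~2.22; higher than for the aligned labels [+1, +1]
```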
The Nyström approximated quantum kernel matrix used in the optimisation of the SVM is also a low-rank approximation, with estimated elements that are subject to finite sampling error. This prompts an analysis of the error introduced – stated in the following Lemma.

Lemma 2. Given a set of data points $\mathcal{S} = \{x_i\}_{i=1}^{N}$ and a quantum kernel function $k$ with an associated Gram matrix $K_{ij} = k(x_i, x_j)$ for $i, j = 1, \ldots, N$, we choose a random subset $\mathcal{S}_L \subseteq \mathcal{S}$ of size $L \leq N$, which we define as $\mathcal{S}_L = \{x_i\}_{i=1}^{L}$ without loss of generality. Estimating each matrix element with $M$ Bernoulli trials such that $\widetilde{K}_{ij} = (1/M)\sum_{p=1}^{M} \widetilde{K}^{(p)}_{ij}$ with $\widetilde{K}^{(p)}_{ij} \sim \mathrm{Bernoulli}(k(x_i, x_j))$ for $i \leq N$, $j \leq L$, the error in the Nyström completed $N \times N$ matrix, $\widetilde{K}$, can be bounded with high probability,
$$\|K - \widetilde{K}\|_2 \leq \widetilde{O}\left(\frac{NL}{\sqrt{M}} + \frac{N}{\sqrt{L}}\right), \qquad (8)$$
where $\widetilde{O}$ hides the log terms.
The proof is given in Appendix B 5. This Lemma indicates that there are two competing interests when it comes to reducing the error with respect to the expected kernel. Reducing the number of landmark points $L$ reduces the number of elements estimated and hence the error introduced by finite sampling noise. On the other hand, the Nyström approximation becomes less accurate, as expressed through the second term in Eq. (8).
However, it is important to realise that the approximation error of the Nyström method was by design, in an attempt to weaken the effectiveness of the split function. To understand the effect of finite sampling error on the SVM model produced, we employ Lemma 2 to show that we can bound the error in the model output – proof in Appendix B 5.
Lemma 3. Let $f(\cdot) = \sum_i \alpha_i k(\cdot, x_i)$ be the ideal Nyström approximated model and $\widetilde{f}(\cdot) = \sum_i \alpha'_i \widetilde{k}(\cdot, x_i)$ be the equivalent perturbed solution as a result of additive finite sampling error on kernel estimations. With high probability, we can bound the error as,
$$|f(x) - \widetilde{f}(x)| \leq O_M\left(\frac{N^{4/3}\sqrt{L}}{\sqrt{M}}\right), \qquad (9)$$
where $O_M$ expresses the term hardest to suppress by $M$.
Lemma 3 indicates that $M \sim O(N^3 L)$ will suffice to suppress errors from sampling. This is in comparison to $M \sim O(N^4)$ without the Nyström approximation [12]. However, note that this is to approximate the linear function $f$, for which the SVM has some robustness since only $\mathrm{sign}[f(\cdot)]$ (6) is required. A caveat to also consider is that it is not necessarily true that a smaller $\|K - \widetilde{K}\|_2$ or $|f(x) - \widetilde{f}(x)|$ will give greater model performance [39].
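The trade-off identified in Lemma 2 can also be probed numerically. The sketch below is illustrative code using a classical stand-in kernel with entries in $[0, 1]$ (only the sampling and completion steps are exercised, not a quantum embedding): it estimates the $N \times L$ block with $M$ Bernoulli trials per element, completes the matrix, and reports the spectral-norm error:

```python
import numpy as np

def sampled_nystrom_error(K, L, M, rng):
    """Return ||K - K_tilde||_2 when only the N x L block is measured, each
    entry with M Bernoulli trials, and the remainder is Nystrom-completed."""
    G_est = rng.binomial(M, K[:, :L]) / M            # finite-shot estimates
    W_est = G_est[:L, :]
    K_tilde = G_est @ np.linalg.pinv(W_est) @ G_est.T
    return float(np.linalg.norm(K - K_tilde, 2))

rng = np.random.default_rng(3)
X = rng.uniform(size=(40, 2))
# Stand-in kernel with entries in [0, 1], mimicking |<Phi(x_j)|Phi(x_i)>|^2.
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
for L, M in [(5, 100), (5, 10000), (20, 100), (20, 10000)]:
    print(f"L={L:2d}  M={M:5d}  error={sampled_nystrom_error(K, L, M, rng):.3f}")
```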
To address the complexity of the QRF model, for an error of $\epsilon = O(1/\sqrt{M})$ on kernel estimations, we can show that training requires $O(TL(d-1)N\epsilon^{-2})$ circuit samples and single-instance prediction has a complexity of $O(TL(d-1)\epsilon^{-2})$ circuit samples – discussed further in Appendix B 4. The main result here is that we are no longer required to estimate $O(N^2)$ elements, as is the case for QSVMs. Though this is a profound reduction for larger datasets with $L \ll N$, it should be noted that some datasets may require $L = O(N)$. Nonetheless, the number of estimations will never be greater than $N^2$, on the basis that kernel estimations are stored in memory across trees.
Finally, to show that the QRF contains hypotheses un-
learnable by both classical learners and linear quantum
models, we extend the concept class generated with the
discrete logarithm problem (DLP) in [12], from a single
dimensional clustering problem to one in two dimensions.
We construct concepts that separate the 2D log-space
(torus) into four regions that can not be differentiated
by a single hyperplane. Hence, the class is unlearnable
by linear QNNs and quantum kernel machines – even
with an appropriate DLP quantum feature map. Classi-
cal learners are also unable to learn such concepts due to
the assumed hardness of the DLP problem. Since QDTs
essentially create multiple hyperplanes in feature space, there exists $f \in \mathcal{H}_{\mathrm{QDT}}$ that emulates a concept in this class.
Further details are presented in Appendix B 7.
IV. NUMERICAL RESULTS & DISCUSSION
The broad structure of the QRF allows for models to
be designed specifically for certain problems. This in-
cludes the selection of a set of embeddings at each level
of the tree that distinguish data points based on different