An efficient combination strategy for hybrid quantum ensemble classifier
Xiao-Ying Zhang1 and Ming-Ming Wang1,∗
1Shaanxi Key Laboratory of Clothing Intelligence, School of Computer Science,
Xi’an Polytechnic University, Xi’an 710048, China
∗bluess1982@126.com
(Dated: today)
Quantum machine learning has shown advantages in many ways compared to classical machine
learning. In machine learning, a difficult problem is how to learn a model with high robustness
and strong generalization ability from a limited feature space. Combining multiple models as base
learners, ensemble learning (EL) can effectively improve the accuracy, generalization ability, and
robustness of the final model. The key to EL lies in two aspects, the performance of base learners and
the choice of the combination strategy. Recently, quantum EL (QEL) has been studied. However,
existing combination strategies in QEL do not adequately account for the accuracy and variance
among base learners. This paper presents a hybrid EL framework that combines quantum and
classical advantages. More importantly, we propose an efficient combination strategy for improving
the accuracy of classification in the framework. We verify the feasibility and efficiency of our
framework and strategy by using the MNIST dataset. Simulation results show that the hybrid EL
framework with our combination strategy not only achieves higher accuracy and lower variance than a single model without ensembling, but also outperforms the majority voting and the weighted voting strategies in accuracy in most cases.
I. INTRODUCTION
Based on the basic principles of quantum mechanics, quantum computing provides new models for accelerating
the solution of some classical problems [1–3]. With the great success of machine learning [4, 5], quantum machine learning [6, 7] has been developed to exploit quantum acceleration. Typical examples include quantum neural
networks (QNNs) [8, 9], quantum principal component analysis [10], quantum support vector machine [11], quantum
unsupervised learning [12], quantum linear system algorithm for dense matrices [13], etc.
Neural networks are at the center of machine learning. As their quantum counterparts, a variety of QNN models have been proposed since their first appearance [8, 9], including models based on quantum dots [14], on superposition [15], on quantum gate circuits [16], on quantum walks [17], and on quantum analogues of classical neurons [18], etc. In the last
few years, quantum deep learning [19], quantum convolutional neural networks (QCNNs) [20–22], quantum generative
adversarial networks [23], and quantum autoencoders [24] have also been developed. Most QNNs are constructed from parameterized quantum circuits (PQCs) [25, 26]. In a PQC model, the number of parameters, the types of quantum gates, and the width and depth of the circuit have a great impact on the required resources, on the difficulty of computing gradients, and on whether an optimal model can be obtained [27, 28].
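To make the PQC picture concrete, the following is a minimal, framework-agnostic sketch of a two-qubit PQC in plain NumPy. The circuit layout and parameter count are arbitrary illustrative choices, not the circuit used in this paper: two layers of RY rotations separated by a CNOT, with the probability of measuring the first qubit in |1⟩ read out as a class score.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control and qubit 1 as target (basis order |q0 q1>).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def pqc_output(params):
    """Apply two RY layers separated by a CNOT to |00> and return the
    probability of measuring qubit 0 in |1>, used here as a class score."""
    state = np.zeros(4)
    state[0] = 1.0                                     # start in |00>
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    state = CNOT @ state
    state = np.kron(ry(params[2]), ry(params[3])) @ state
    # Amplitudes with qubit 0 in |1> sit at indices 2 (|10>) and 3 (|11>).
    return state[2] ** 2 + state[3] ** 2

print(pqc_output(np.array([0.3, 1.2, -0.7, 0.5])))
```

In this toy example the circuit depth and the four trainable parameters are fixed by hand; the trade-offs discussed above concern how such choices scale with problem size.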
Ensemble learning (EL), also known as a multi-classifier system, is an important paradigm that combines multiple learning models to achieve better performance. In 1990, Schapire pointed out that it is possible to surpass one strong learner by combining several weak learners (base learners) in the probably approximately correct sense [29], which laid the foundation for the AdaBoost algorithm [30]. EL has shown advantages in avoiding over-fitting, reducing the risk of decision errors, and reducing the chance of getting trapped in local minima [30]. It has been widely used in object detection [31], education [32], malware detection [33], etc. The performance of an EL mainly depends on the diversity and the prediction performance of its base learners. Diversity can be realized by using different model structures, training sets, subsetting methods, and so on [30], while the prediction performance of an EL is correlated with the degree to which the errors of base learners are uncorrelated [34]. An important part of an EL is the combination strategy used to combine the predictions of base learners. Currently, combination strategies can be roughly divided into weight-combination and meta-learning methods. For an EL in binary classification tasks, the classification accuracy of each base learner must be better than random guessing for the EL to be effective; in addition, diversity among the models should still be maintained. There are many classical EL methods, such as AdaBoost, bagging, stacking, random forest, and so on [35, 36].
For quantum computing, some studies have been performed on quantum ensemble learning (QEL) [37–40]. In Ref.
[37], Schuld et al. proposed a framework to construct ensembles of quantum classifiers, which evaluates the predictions of exponentially large ensembles in parallel. In Refs. [38, 39], a QEL scheme using bagging was proposed
by using the quantum cosine classifier as the base learner, and numerical simulation experiments were carried out. However, their quantum base learner is untrained, and the implementation of the classifier handles only one test sample at a time, which is not suitable for practical applications. Based on Ref. [37], Araujo et al. [40] further proposed a QEL of trained classifiers, which showed the advantage of QEL for quantum classifiers. However, the growth of the data size and the number of optimization steps has a great impact on the model, making it difficult to execute on a quantum computer. Moreover, their combination process cannot effectively differentiate among base learners, which impairs the accuracy of the result.
In this paper, a quantum-classical hybrid EL model is proposed. That is, quantum classifiers are used as base learners, and the bagging EL method [41] is used to assemble the quantum classifiers on a classical computer. Since the previous QEL framework [40] cannot clearly distinguish the differences among base learners, while classical combination strategies cannot capture the similarity among base learners, we propose a new combination strategy that considers both the differences among base learners and the performance of each learner in our framework. The MindQuantum platform [42] is used as the training platform for each quantum base learner, and the MNIST handwritten digits [43] are used as the dataset. The feasibility of our strategy is verified by using the homogeneous ensemble method, which uses base learners with the same structure as ensemble members.
II. ENSEMBLE LEARNING
EL is mainly divided into the “homogeneous” ensemble and the “heterogeneous” ensemble; their main difference lies in whether the classifiers adopt the same model structure. A single learner in an EL is called a base learner. The key to the final effectiveness of the ensemble lies in whether the base learners remain “good but different”. That is, each base learner should have an accuracy better than random guessing, and there should be certain differences between them [30, 44, 45]. For a binary classification problem, assuming that the accuracy of each base classifier is $p$ and the base classifiers are independent of each other, if a simple voting method is adopted to combine $N$ base classifiers, then the error rate of the ensemble result is [45]
$$\sum_{k=0}^{\lfloor N/2 \rfloor} \binom{N}{k} p^{k} (1-p)^{N-k}. \qquad (1)$$
Under ideal conditions, the error rate will gradually decrease and eventually approach 0 with the increase of the
number of base classifiers. Assuming the accuracies of two classifiers are $acc_1$ and $acc_2$ with $\frac{1}{2} \le acc_1 \le acc_2 \le 1$, the similarity interval of the two classifiers is $[acc_1 - (1 - acc_2),\ acc_1 + (1 - acc_2)]$. The higher the accuracy of the base classifiers, the higher the similarity between the models and the smaller the difference between them, in which case the ensemble is likely to be ineffective. Hence, the ideal conditions are almost impossible to reach.
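As a quick numerical illustration of Eq. (1) (a sketch; the values of p and N are chosen arbitrarily), the following computes the voting-ensemble error rate and shows it shrinking as N grows when p > 1/2:

```python
from math import comb

def ensemble_error(p, N):
    """Error rate of a simple-voting ensemble of N independent binary
    classifiers, each with accuracy p, following Eq. (1): the ensemble
    errs when at most floor(N/2) members are correct."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N // 2 + 1))

for N in (1, 5, 11, 21, 51):
    print(N, round(ensemble_error(0.6, N), 4))
# With p = 0.6 the error decreases toward 0 as N grows; with p = 0.5 it
# stays at chance, matching the "better than random guessing" requirement.
```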
In addition to the performance requirements on base learners and their models, the combination strategy is also an important factor affecting the results. In general, models need to be pruned before the combination strategy is applied, i.e., one determines which models can participate in the combination. For example, sorting-based or search-based strategies can be used to filter models [41]. The models’ predictions are then combined. Commonly used combination strategies include the averaging strategy, the voting strategy, and the learning strategy [46]. Among them, the voting strategy is divided into absolute majority voting, plurality voting, and weighted voting [45].
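The following is a small sketch of the plurality and weighted voting strategies just mentioned, with made-up predictions and weights; absolute majority voting would additionally reject a sample whose top label receives no more than half of the votes.

```python
import numpy as np

def plurality_vote(preds):
    """preds: (n_learners, n_samples) array of predicted labels.
    Each sample receives the most frequent label among the learners."""
    return np.array([np.bincount(preds[:, j]).argmax()
                     for j in range(preds.shape[1])])

def weighted_vote(preds, weights, n_classes):
    """Each learner's vote is scaled by its weight (e.g., its validation
    accuracy); the label with the highest total score wins."""
    scores = np.zeros((n_classes, preds.shape[1]))
    for i, w in enumerate(weights):
        for j, label in enumerate(preds[i]):
            scores[label, j] += w
    return scores.argmax(axis=0)

preds = np.array([[0, 1, 1],    # learner 1's labels for 3 samples
                  [0, 1, 0],    # learner 2
                  [1, 1, 0]])   # learner 3
print(plurality_vote(preds))                      # -> [0 1 0]
print(weighted_vote(preds, [1.5, 0.6, 0.5], 2))   # -> [0 1 1]
```

Note how the highly weighted learner 1 flips the third sample under weighted voting, which plurality voting cannot do.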
III. HYBRID QUANTUM-CLASSICAL ENSEMBLE LEARNING
We present a hybrid EL framework that combines quantum computation with classical computation. It takes advantage of the parallelism of quantum computing and the convenience of classical parameter processing. The hybrid learning framework is shown in Fig. 1. Taking an image classification task as an example, the dimensionality of the image dataset is first reduced on the classical computer to preserve $2^n$ features, where $n$ is the number of qubits used by each quantum base learner. The dataset is divided into a training set and a test set. By random sampling, $N$ subsets are drawn from the training set, where $N$ is the number of quantum base learners. These data are encoded into quantum states before the quantum base learners are trained on a quantum computer. The training of the quantum base learners can be completed in parallel by multiprocessing. Each quantum base learner predicts on the test set, and the decision result is output as the final result on the classical computer according to the combination strategy.