
An efficient combination strategy for a hybrid quantum ensemble classifier
Xiao-Ying Zhang1 and Ming-Ming Wang1,∗
1Shaanxi Key Laboratory of Clothing Intelligence, School of Computer Science,
Xi’an Polytechnic University, Xi’an 710048, China
(Dated: October 13, 2022)
Quantum machine learning has shown advantages over classical machine learning in many respects. A difficult
problem in machine learning is how to learn a model with high robustness and strong generalization ability from a
limited feature space. By combining multiple models as base learners, ensemble learning (EL) can effectively improve
the accuracy, generalization ability, and robustness of the final model. The key to EL lies in two aspects: the
performance of the base learners and the choice of the combination strategy. Recently, quantum EL (QEL) has been
studied. However, existing combination strategies in QEL do not adequately account for the accuracy of, and the
variance among, base learners. This paper presents a hybrid EL framework that combines quantum and classical
advantages. More importantly, we propose an efficient combination strategy for improving classification accuracy
within the framework. We verify the feasibility and efficiency of our framework and strategy on the MNIST dataset.
Simulation results show that the hybrid EL framework with our combination strategy not only achieves higher
accuracy and lower variance than a single model without the ensemble, but also achieves better accuracy than the
majority voting and weighted voting strategies in most cases.
I. INTRODUCTION
Based on the basic principles of quantum mechanics, quantum computing provides new models for accelerating
solutions of some classical problems [1–3]. With the great success of machine learning [4, 5], quantum machine learning
[6, 7] has been developed to exploit quantum acceleration. Typical examples include quantum neural
networks (QNNs) [8, 9], quantum principal component analysis [10], quantum support vector machine [11], quantum
unsupervised learning [12], quantum linear system algorithm for dense matrices [13], etc.
Neural networks are at the center of machine learning. As their quantum counterparts, various types of QNN models
have been proposed since their first appearance [8, 9], including models based on quantum dots [14], on superposition [15],
on quantum gate circuits [16], on quantum walks [17], and on quantum analogues of classical neurons [18]. In the last
few years, quantum deep learning [19], quantum convolutional neural networks (QCNNs) [20–22], quantum generative
adversarial networks [23], and quantum autoencoders [24] have also been developed. Most QNNs are constructed from
parameterized quantum circuits (PQCs) [25, 26]. In a PQC model, the number of parameters, the types of quantum
gates, and the width and depth of the circuit strongly affect the required resources, the difficulty of computing
gradients, and whether an optimal model can be obtained [27, 28].
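To make the PQC structure concrete, the following is a minimal sketch of a PQC-based binary classifier written with the PennyLane library (an assumption; the paper does not specify a framework). The qubit count, layer count, and gate choices are illustrative and are not the circuit used in this work.

```python
import pennylane as qml
import numpy as np

n_qubits = 4   # circuit width (illustrative assumption)
n_layers = 2   # circuit depth (illustrative assumption)

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def pqc_classifier(params, x):
    # Angle-encode the classical feature vector x into the qubits.
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    # Trainable layers: single-qubit rotations followed by entangling CNOTs.
    for layer in range(n_layers):
        for i in range(n_qubits):
            qml.RY(params[layer, i], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
    # The expectation of Pauli-Z on qubit 0 serves as the class score.
    return qml.expval(qml.PauliZ(0))

params = np.random.uniform(0, np.pi, size=(n_layers, n_qubits))
x = np.array([0.1, 0.5, 0.9, 0.3])
print(pqc_classifier(params, x))  # score in [-1, 1]; its sign gives the label
```

In such a sketch, each additional layer adds trainable parameters and entanglement, illustrating how circuit depth trades expressivity against resource cost and gradient difficulty.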
Ensemble learning (EL), also known as a multi-classifier system, is an important approach that combines multiple
learning models to achieve better performance. In 1990, Schapire showed that it is possible to surpass one strong
learner by combining several weak learners (base learners) in the probably approximately correct sense [29]. This
result laid the foundation for the AdaBoost algorithm [30]. EL has shown advantages in avoiding over-fitting, reducing
the risk of decision errors, and reducing the chance of getting trapped in local minima [30]. It has been widely used in
object detection [31], education [32], malware detection [33], etc. The performance of an EL system mainly depends
on the diversity and the prediction performance of its base learners. Diversity can be achieved by using different
model structures, training sets, feature-subsetting methods, and so on [30], while the prediction performance of the
ensemble is correlated with the degree to which the errors of the base learners are uncorrelated [34]. An important
part of an EL system is the combination strategy for combining the predictions of base learners. Currently, combination
strategies can be roughly divided into weight-combination and meta-learning methods. For an EL system on binary
classification tasks, the classification accuracy of each base learner must be better than random guessing for the
ensemble to be effective; in addition, diversity among the models should still be maintained. There are many classical
EL methods, such as AdaBoost, bagging, stacking, random forests, and so on [35, 36].
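As a concrete illustration of the weight-combination family, the following is a minimal sketch of majority voting and accuracy-weighted voting for binary base learners. The predictions and weights are invented for illustration and are not taken from this paper's experiments.

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_learners, n_samples) array of 0/1 labels."""
    votes = predictions.sum(axis=0)
    # A sample is labeled 1 if more than half of the learners vote 1.
    return (votes > predictions.shape[0] / 2).astype(int)

def weighted_vote(predictions, weights):
    """Each learner's vote is scaled by its weight (e.g., validation accuracy)."""
    weights = np.asarray(weights, dtype=float)
    score = weights @ predictions          # weighted sum of positive votes
    return (score > weights.sum() / 2).astype(int)

# Three base learners, each assumed to be better than random guessing.
preds = np.array([[1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]])
print(majority_vote(preds))                   # -> [1 1 1 1]
print(weighted_vote(preds, [0.9, 0.6, 0.55]))
```

Majority voting treats all base learners equally, whereas weighted voting lets a more accurate learner dominate; neither, however, accounts for the variance among base learners, which motivates the combination strategy proposed in this paper.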
For quantum computing, some studies have been performed on quantum ensemble learning (QEL) [37–40]. In Ref.
[37], Schuld et al. proposed a framework for constructing ensembles of quantum classifiers that evaluates the predictions
of exponentially large ensembles in parallel. In Refs. [38, 39], a QEL scheme using bagging was proposed
∗ bluess1982@126.com