Many MCC adaptive filtering algorithms have been developed [7, 8, 9, 10, 11, 12, 13]. They can be classified into two
categories: stochastic gradient type algorithms [7, 8, 9] and recursive MCC (RMCC) type algorithms [10, 11, 12, 13]. The
design of the RMCC is somewhat similar to that of the RLS. In [7], an MCC algorithm was proposed for environments with impulsive noise, and its theoretical excess mean square error (MSE) was derived using the energy conservation principle [8]. In [9], a generalized Gaussian density (GGD) cost function was employed to replace the Gaussian kernel used in the original MCC algorithm, leading to a generalized maximum correntropy criterion (GMCC) algorithm. Compared with stochastic gradient type algorithms, the RMCC type algorithms can achieve superlinear convergence [10, 11, 12].
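For concreteness, a minimal sketch of the correntropy cost under a Gaussian kernel and the resulting stochastic-gradient MCC update is given below; the error e(n), input vector x(n), weight vector w(n), kernel bandwidth σ, and step size μ are symbols introduced here for illustration only, and the precise formulation used in this paper follows in Section 2:
\begin{align}
J_{\mathrm{MCC}}(\mathbf{w}) &= \mathrm{E}\!\left[\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)\right], \qquad e(n)=d(n)-\mathbf{w}^{T}(n)\,\mathbf{x}(n),\\
\mathbf{w}(n+1) &= \mathbf{w}(n)+\mu\,\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)\,\mathbf{x}(n).
\end{align}
The GMCC of [9] replaces the Gaussian kernel above with a GGD-shaped kernel proportional to exp(−|e(n)/β|^α), which recovers the Gaussian case for α = 2 up to a bandwidth rescaling.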
Motivated by the success of sparsity-aware MMSE/LS adaptive filtering algorithms [14, 15, 16, 17, 18], the design of sparse MCC adaptive filtering algorithms has recently attracted great attention [19, 20, 21, 22, 23]. In [19], a correntropy induced metric (CIM) MCC (CIMMCC) algorithm was designed by exploiting the CIM to approximate the ℓ0 norm in the cost function. In [20, 21], the ℓ1 norm and a general regularization function were respectively introduced into the RMCC algorithm, leading to faster convergence under sparse systems. In [22, 23], the proportionate MCC (PMCC) and hybrid-norm constrained PMCC algorithms were proposed for wide-sense sparse systems [24] and block-sparse systems [25], respectively.
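As an illustration of how the CIM acts as a sparsity regularizer, a commonly used approximation is sketched below, assuming a kernel width σ_c that is small relative to the nonzero coefficient magnitudes and a regularization weight γ (both symbols introduced here for illustration; the specific regularizers of [19, 20, 21] may differ in detail):
\begin{equation}
\|\mathbf{w}\|_{0}\;\approx\;\sum_{i=1}^{N}\left[1-\exp\!\left(-\frac{w_i^{2}}{2\sigma_c^{2}}\right)\right]
=\frac{N}{\kappa_{\sigma_c}(0)}\,\mathrm{CIM}^{2}(\mathbf{w},\mathbf{0}),
\qquad \kappa_{\sigma_c}(0)=\frac{1}{\sqrt{2\pi}\,\sigma_c},
\end{equation}
so that a generic sparse correntropy cost takes the regularized form J_MCC(w) − γ CIM²(w, 0), or J_MCC(w) − γ‖w‖_1 when the ℓ1 norm is used instead.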
In our recent work [26], the proportionate updating (PU) mechanism was introduced to the standard RLS, leading to the
proportionate recursive least squares (PRLS) algorithm with improved performance under sparse systems. This naturally
motivates us to incorporate a similar proportionate matrix into the existing RMCC to establish a proportionate recursive maximum correntropy criterion (PRMCC) algorithm. According to the analysis in [27], the parameter θ controlling the trace of the proportionate matrix in the PRLS trades off between the initial convergence and the steady-state performance. Specifically, when θ ≤ N (N is the filter length), the steady state of the PRLS is at least as good as that of the standard RLS.
However, its convergence rate may be slower. To achieve fast convergence and low steady-state error simultaneously, two
methods have been proposed for LMS-type algorithms: variable step-size approaches [28, 29, 30] and convex combination [31, 32, 33, 34]. As θ plays a role similar to that of the step size in the LMS, we explore the feasibility of further improving the
PRMCC. In particular, we propose an adaptive convex combination of two PRMCC filters, leading to the combinational
PRMCC (CPRMCC) algorithm. Theoretical performance analyses of the PRMCC and CPRMCC are then provided. For the
the PRMCC, its stability condition is first derived based on the Taylor expansion approach. Then, we investigate its steady-
state performance and tracking ability, based on which theoretically optimal parameter values can be determined. For the
CPRMCC, we prove that its performance is at least as good as that of the better of its two component filters. Simulation results on sparse channel estimation are presented to demonstrate the effectiveness of the proposed algorithms.
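For context, the adaptive convex combination scheme of the type used in [31, 32, 33, 34] can be sketched as follows, where y1(n) and y2(n) denote the outputs of the two component filters, λ(n) is the sigmoid-activated mixing parameter, and μ_a is an auxiliary step size (symbols introduced here for illustration; the exact CPRMCC combination rule is developed in Section 2):
\begin{align}
y(n) &= \lambda(n)\,y_{1}(n) + \big[1-\lambda(n)\big]\,y_{2}(n), \qquad \lambda(n)=\frac{1}{1+e^{-a(n)}},\\
a(n+1) &= a(n) + \mu_{a}\,e(n)\,\big[y_{1}(n)-y_{2}(n)\big]\,\lambda(n)\,\big[1-\lambda(n)\big],
\end{align}
with overall error e(n) = d(n) − y(n). The sigmoid keeps λ(n) within (0, 1), so the combined filter smoothly shifts its weight toward whichever component currently performs better.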
The rest of this paper is organized as follows. In Section 2, we develop the PRMCC and CPRMCC algorithms. Section 3
provides the stability condition and analyzes the mean-square performance for the PRMCC in stationary and nonstationary
environments. In addition, the superiority of the CPRMCC is verified. In Section 4, simulation results are presented and
Section 5 concludes the paper.
Notation: We use bold capital letters, boldface lowercase letters, and italic letters, e.g., A, a, and a, to represent matrix, vector, and scalar quantities, respectively. The superscript (·)T denotes the transpose, and E[·] represents the statistical expectation.