Proportionate Recursive Maximum Correntropy Criterion Adaptive Filtering Algorithms and their Performance Analysis

Zhen Qin (a,b), Jun Tao (c,*), Le Yang (d), Ming Jiang (e)
a. Department of Computer Science and Engineering, Ohio State University, Columbus, 43210, USA.
b. School of Information Science and Engineering, Southeast University, Nanjing, 210096, China.
c. Key Laboratory of Underwater Acoustic Signal Processing of the Ministry of Education, School of Information Science and Engineering, Southeast University, Nanjing, 210096, China.
d. Department of ECE, University of Canterbury, Christchurch, 8020, New Zealand.
e. Pengcheng Lab, Shenzhen, 518066, China.
Abstract
The maximum correntropy criterion (MCC) has been employed to design outlier-robust adaptive filtering algorithms,
among which the recursive MCC (RMCC) algorithm is a typical one. Motivated by the success of our recently proposed
proportionate recursive least squares (PRLS) algorithm for sparse system identification, we propose to introduce the pro-
portionate updating (PU) mechanism into the RMCC, leading to two sparsity-aware RMCC algorithms: the proportionate
recursive MCC (PRMCC) algorithm and the combinational PRMCC (CPRMCC) algorithm. The CPRMCC is implemented
as an adaptive convex combination of two PRMCC filters. For PRMCC, its stability condition and mean-square perfor-
mance were analyzed. Based on the analysis, optimal parameter selection in nonstationary environments was obtained.
Performance study of CPRMCC was also provided and showed that the CPRMCC performs at least as well as the better
component PRMCC filter in steady state. Numerical simulations of sparse system identification corroborate the advantage
of proposed algorithms as well as the validity of theoretical analysis.
Keywords: Adaptive filter, maximum correntropy criterion, proportionate updating, sparse system.
1. Introduction
The minimum mean square error (MMSE) and least squares (LS) criteria have been widely adopted in the design of adaptive filtering algorithms, leading to, e.g., the least mean squares (LMS) and recursive least squares (RLS) filters. These
classic algorithms have attained great success in many applications such as channel equalization [1], noise cancellation [2],
system modeling [3], just to name a few. However, the MMSE/LS criteria are known to be sensitive to outliers [4]. Outliers
are common in various contexts, such as sensor calibration [5] (due to sensor failure), face recognition [6] (owing to self-
shadowing, specularity, or brightness saturation), and others. In contrast, the emerging maximum correntropy criterion
(MCC) is more robust to outliers.
*Corresponding author.
Email addresses: qin.660@osu.edu (Zhen Qin), jtao@seu.edu.cn (Jun Tao), le.yang@cantebury.ac.nz (Le Yang),
jiangm@pcl.ac.cn (Ming Jiang)
Preprint submitted to Digital Signal Processing October 10, 2023
arXiv:2210.12311v2 [eess.SP] 8 Oct 2023
Many MCC adaptive filtering algorithms have been developed [7, 8, 9, 10, 11, 12, 13]. They can be classified into two
categories: stochastic gradient type algorithms [7, 8, 9] and recursive MCC (RMCC) type algorithms [10, 11, 12, 13]. The
design of RMCC is somewhat similar to that of the RLS. In [7], an MCC algorithm was proposed for environments with impulsive noise, and its theoretical excess mean square error (EMSE) was derived using the energy conservation principle [8]. In [9], a generalized Gaussian density (GGD) cost function was employed to replace the Gaussian kernel used in the original MCC algorithm, leading to a generalized maximum correntropy criterion (GMCC) algorithm. Compared with stochastic
gradient type algorithms, the RMCC type algorithms can achieve superlinear convergence [10, 11, 12].
Motivated by the success of sparsity-aware MMSE/LS adaptive filtering algorithms [14, 15, 16, 17, 18], the design of sparse MCC adaptive filtering algorithms has recently attracted great attention [19, 20, 21, 22, 23]. In [19], a correntropy-induced metric (CIM) MCC (CIMMCC) algorithm was designed by exploiting the CIM to approximate the l0-norm in the cost function. In [20, 21], the l1-norm and a general regularization function were respectively introduced into the RMCC algorithm, leading to faster convergence for sparse systems. In [22, 23], the proportionate MCC (PMCC) and
hybrid-norm constrained PMCC algorithms were proposed for wide-sense sparse systems [24] and block sparse systems
[25], respectively.
In our recent work [26], the proportionate updating (PU) mechanism was introduced to the standard RLS, leading to the
proportionate recursive least squares (PRLS) algorithm with improved performance under sparse systems. This naturally
motivates us to incorporate a similar proportionate matrix to the existing RMCC to establish a proportionate recursive
maximum correntropy criterion (PRMCC) algorithm. According to the analysis in [27], the parameter θ controlling the trace of the proportionate matrix in the PRLS trades off initial convergence against steady-state performance. Specifically, when θ ≤ N (N is the filter length), the steady-state performance of the PRLS is at least as good as that of the standard RLS.
However, its convergence rate may be slower. To achieve fast convergence and low steady-state error simultaneously, two
methods have been proposed for LMS-type algorithms: variable step approaches [28, 29, 30] and convex combination
[31, 32, 33, 34]. As θ plays a role similar to that of the step size in the LMS, we explore the feasibility of further improving the PRMCC. In particular, we propose an adaptive convex combination of two PRMCC filters, leading to the combinational PRMCC (CPRMCC) algorithm. Theoretical performance analysis of the PRMCC and CPRMCC is then provided. For
the PRMCC, its stability condition is first derived based on the Taylor expansion approach. Then, we investigate its steady-
state performance and tracking ability, based on which theoretically optimal parameter values can be determined. For the
CPRMCC, we prove that its performance is at least as good as that of the better of its two component filters. Simulation results on
sparse channel estimation are presented to demonstrate the effectiveness of proposed algorithms.
The rest of this paper is organized as follows. In Section 2, we develop the PRMCC and CPRMCC algorithms. Section 3
provides the stability condition and analyzes the mean-square performance for the PRMCC in stationary and nonstationary
environments. In addition, the superiority of CPRMCC is verified. In Section 4, simulation results are presented and
Section 5 concludes the paper.
Notation: We use bold capital letters, boldface lowercase letters, and italic letters, e.g., A, a, and a, to respectively represent matrix, vector, and scalar quantities. The superscript (·)^T denotes the transpose, and E[·] represents the statistical expectation. For a vector a of size N×1, its l_n-norm is defined as ||a||_n = (Σ_{m=1}^{N} |a_m|^n)^{1/n}. The tr(A) returns the trace of a matrix A.
2. Sparsity-aware RMCC Adaptive Filtering Algorithms
Consider a standard system identification setting in the real domain with the following input-output relationship

d(n) = w_N^T x_N(n) + v(n),   (1)

where w_N = [w_1, w_2, ..., w_N]^T is the N×1 system impulse response vector to be estimated, and x_N(n) = [x(n), x(n−1), ..., x(n−N+1)]^T denotes the input vector at time instant n. The d(n) and v(n) represent the system output and the additive noise, respectively.
Given two random variables X and Y, the correntropy is defined as [35]

V(X, Y) = E[κ(X, Y)] = ∫ κ(x, y) dF_XY(x, y),   (2)

where κ(·, ·) is a shift-invariant Mercer kernel, and F_XY(x, y) denotes the joint distribution function of (X, Y). Unless otherwise specified, the following Gaussian kernel is used in this work:

κ(x, y) = G_σ(e) = (1/(√(2π) σ)) exp(−e²/(2σ²)),   (3)
where e = x − y and σ > 0 is the kernel bandwidth. The correntropy measures how similar two random variables are within a neighborhood controlled by the kernel bandwidth σ: the smaller the value of σ, the better it suppresses the impact of outliers. As σ → ∞, the metric reduces to a global measure, namely the mean square error (MSE).
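The robustness argument above can be made concrete with a few lines of NumPy. The sketch below (the function name is ours, not from the paper) evaluates the Gaussian kernel of (3) on small errors and on one outlier-sized error: the squared error of the outlier dominates an MSE-type cost, while its kernel value is essentially zero, so it contributes almost nothing to the correntropy.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian kernel G_sigma(e) of (3); bounded, so large |e| is capped."""
    return np.exp(-np.asarray(e)**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

errors = np.array([0.1, 0.5, 10.0])    # the last entry mimics an outlier
print(errors**2)                        # the outlier dominates the MSE cost
print(gaussian_kernel(errors, 1.0))     # the outlier's kernel value is ~0
```

Shrinking sigma tightens the neighborhood further, which is exactly the σ trade-off described above.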
The RMCC employs the following cost function

J_RMCC(n) = Σ_{i=1}^{n} λ^{n−i} exp(−e²(i|n)/(2σ²)),   (4)

where 0 < λ < 1 is a forgetting factor and

e(i|n) = d(i) − w_N^T(n) x_N(i).   (5)
The weight vector w_N(n) = [w_1(n), w_2(n), ..., w_N(n)]^T denotes an estimate of w_N at time n. According to [21], the weight update of RMCC is governed by

w_N(n) = w_N(n−1) + k(n) f(e(n|n−1)),   (6)

where

f(e(n|n−1)) = exp(−e²(n|n−1)/(2σ²)) e(n|n−1),   (7)

with the a priori error being e(n|n−1) = d(n) − w_N^T(n−1) x_N(n), and the (Kalman) gain vector k(n) is

k(n) = P(n−1) x_N(n) / (λ + exp(−e²(n|n−1)/(2σ²)) x_N^T(n) P(n−1) x_N(n)).   (8)

The matrix P(n−1) can be iteratively computed using

P(n−1) = λ⁻¹ [P(n−2) − exp(−e²(n−1|n−2)/(2σ²)) k(n−1) x_N^T(n−1) P(n−2)],   (9)

so that (8) can be rewritten as

k(n) = λ⁻¹ [P(n−1) − exp(−e²(n|n−1)/(2σ²)) k(n) x_N^T(n) P(n−1)] x_N(n) = P(n) x_N(n).   (10)
2.1. The PRMCC Algorithm

Motivated by the PRLS from [26], an N×N proportionate matrix G(n−1) is introduced into the update of (6), leading to the PRMCC algorithm with the following update rule:

w_N(n) = w_N(n−1) + G(n−1) k(n) f(e(n|n−1)),   (11)

where G(n−1) = diag{g_1(n−1), g_2(n−1), ..., g_N(n−1)} with the k-th diagonal element given by

g_k(n−1) = θ(1−α)/(2N) + θ(1+α)|w_k(n−1)| / (2||w_N(n−1)||_1 + ϵ).   (12)

Here, ϵ is a small positive constant, α ∈ [−1, 1), and θ > 0 is the trace controller. The parameter θ trades off the convergence and steady-state behaviors of the PRMCC. Note that as σ → ∞ in (7), the PRMCC algorithm reduces to the PRLS algorithm.

The implementation of the PRMCC algorithm is summarized in Algorithm 1.
2.2. The CPRMCC Algorithm
As mentioned before, the convergence and steady-state behaviors of the PRMCC are traded off by the parameter θ in the proportionate matrix. To simultaneously achieve fast convergence and good steady-state performance, we further propose the CPRMCC algorithm, which is an adaptive convex combination of two PRMCC filters with two different trace
Algorithm 1 The PRMCC Algorithm
Initialization: P(0) = δ⁻¹ I_N, w_N(0) = 0_{N×1}; λ, δ, ϵ, θ, α, and σ are constants.
Iteration: For n = 1, 2, ...
  e(n|n−1) = d(n) − w_N^T(n−1) x_N(n)
  k(n) = P(n−1) x_N(n) / (λ + exp(−e²(n|n−1)/(2σ²)) x_N^T(n) P(n−1) x_N(n))
  g_k(n−1) = θ(1−α)/(2N) + θ(1+α)|w_k(n−1)| / (2||w_N(n−1)||_1 + ϵ), k = 1, 2, ..., N
  G(n−1) = diag{g_1(n−1), g_2(n−1), ..., g_N(n−1)}
  f(e(n|n−1)) = exp(−e²(n|n−1)/(2σ²)) e(n|n−1)
  w_N(n) = w_N(n−1) + G(n−1) k(n) f(e(n|n−1))
  P(n) = λ⁻¹ [P(n−1) − exp(−e²(n|n−1)/(2σ²)) k(n) x_N^T(n) P(n−1)]
controllers. The output of the CPRMCC is given as

y(n) = ρ(n) w_1^T(n) x_N(n) + (1 − ρ(n)) w_2^T(n) x_N(n) = w_N^T(n) x_N(n),   (13)

where w_1(n) and w_2(n) are updated as in (11) with trace controllers set to θ_1 and θ_2. The overall weight estimate is then

w_N(n) = ρ(n) w_1(n) + (1 − ρ(n)) w_2(n).   (14)

The mixing parameter ρ(n) is defined via a sigmoidal function

ρ(n) = sgm[b(n−1)] = 1 / (1 + exp(−b(n−1))),   (15)
where b(n−1) is iteratively updated via [34]

b(n−1) = [ b(n−2) + μ_b exp(−e²(n−1|n−1)/(2σ_b²)) e(n−1|n−1) (y_1(n−1) − y_2(n−1)) ρ(n−1)(1 − ρ(n−1)) ]_{−b⁺}^{+b⁺},   (16)

in which

y_1(n) = w_1^T(n) x_N(n) and y_2(n) = w_2^T(n) x_N(n),   (17)

and μ_b and σ_b are, respectively, the step size and kernel bandwidth. The value of b(n−1) is limited to the interval [−b⁺, b⁺] [31, 32, 33] to avoid premature stalling of the update when ρ(n−1) is close to zero or one.
The detailed implementation of the CPRMCC algorithm is summarized in Algorithm 2.
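The mixing recursion (15)-(16) amounts to a single clipped stochastic-gradient step per sample. The sketch below is our own illustration (the function name and the values of μ_b, σ_b, and b⁺ are illustrative choices, not from the paper): it updates the auxiliary variable b and returns the mixing weight ρ.

```python
import numpy as np

def update_mixing(b, e, y1, y2, mu_b=0.5, sigma_b=1.0, b_plus=4.0):
    """One step of (15)-(16): rho is a sigmoid of b, and b follows a
    correntropy-weighted gradient, clipped to [-b_plus, b_plus] so the
    adaptation never stalls when rho saturates near 0 or 1."""
    rho = 1.0 / (1.0 + np.exp(-b))                      # rho(n) = sgm[b(n-1)]
    grad = (np.exp(-e**2 / (2.0 * sigma_b**2)) * e
            * (y1 - y2) * rho * (1.0 - rho))            # gradient term of (16)
    b = float(np.clip(b + mu_b * grad, -b_plus, b_plus))
    return b, rho
```

If component 1's output y1 matches the desired signal while y2 does not, the combined error satisfies e(y1 − y2) = (1 − ρ)(y1 − y2)² ≥ 0, so b grows and ρ tends to 1; the combination thus locks onto the better component filter, as claimed in the abstract.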
The CPRMCC can be further improved by speeding up the convergence of the slower component PRMCC filter via the