Robust Singular Values based on L1-norm PCA
Duc H. Le and Panos P. Markopoulos†∗
Dept. of Electr. & Microelectron. Engineering, Rochester Institute of Technology, Rochester, NY
E-mail: dhl3772@rit.edu
Depts. of Electr. & Comput. Engineering and Comput. Science, The University of Texas at San Antonio, San Antonio, TX
E-mail: panos@utsa.edu
Abstract
Singular-Value Decomposition (SVD) is a ubiquitous data analysis method in engineering, science, and statistics. Singular-value estimation, in particular, is of critical importance in an array of engineering applications, such as channel estimation in communication systems, electromyography signal analysis, and image compression, to name just a few. Conventional SVD of a data matrix coincides with standard Principal-Component Analysis (PCA). The L2-norm (sum of squared values) formulation of PCA promotes peripheral data points and, thus, makes PCA sensitive to outliers. Naturally, SVD inherits this outlier sensitivity. In this work, we present a novel robust non-parametric method for SVD and singular-value estimation based on an L1-norm (sum of absolute values) formulation, which we name L1-cSVD. Accordingly, the proposed method demonstrates strong resistance to outliers and can facilitate more reliable data analysis and processing in a wide range of engineering applications.
Index Terms
Singular value decomposition, principal component analysis, subspace signal processing, outliers.
I. INTRODUCTION
Singular-Value Decomposition (SVD) has established itself as a powerful tool, ubiquitous in various engineering applications.
For example, applying SVD to the channel matrix of a multiple-input multiple-output (MIMO) channel decomposes the MIMO
channel into multiple single-input single-output (SISO) channels with gains corresponding to singular values, which enables
efficient power allocation and channel capacity estimation [1], [2]. Furthermore, SVD has been extensively employed in various
watermarking schemes [3], [4], direction-of-arrival (DOA) estimation [5], [6], restructuring of deep neural network acoustic
models [7], electromyography (EMG) signal analysis [8], etc.
Another data analysis method that is closely related to SVD is Principal-Component Analysis (PCA), which is also useful
in a number of fields, such as machine learning, signal processing, and pattern recognition [9], [10], [11]. The traditional PCA method seeks to maximize the variance (i.e., the L2-norm) of the data coordinates projected onto the principal components (PCs). However, because of its emphasis on squared coordinates, PCA is sensitive to corruption from gross and sparse outliers.
Thus, there has been considerable research effort toward reformulating PCA with the L1-norm instead (L1-PCA), which suppresses the effect of extreme data points [12], [13]. The exact solution to L1-PCA can be obtained in time polynomial in the number of data points [14]. However, optimality can be traded for lower computational complexity, as in a greedy approach [15], a semidefinite programming approach [16], an alternating algorithm [17], and a bit-flipping algorithm [18], to name a few. Apart from L1-PCA, another line of research that aims to ameliorate the effect of outlier corruption is Robust Principal-Component Analysis (RPCA) [19], which strives to decompose a matrix into a sparse and a low-rank component.
On the other hand, besides the PCs, or equivalently the left singular vectors of SVD [20], a robust, outlier-resistant estimation of the singular values (SVs) is also of great interest. Regrettably, the L1-PCs, while robust against outliers, do not possess the attractive property of their L2-norm counterparts to diagonalize the data matrix X [21], thus making the extension from L1-PCA to SV estimation non-trivial.
In this paper, we leverage the robustness of the subspace computed by existing L1-PCA algorithms for SV estimation. To this end, we propose an algorithm named L1-cSVD that finds the SVs and right singular vectors associated with a given L1-subspace by solving a re-orthogonalization problem. We then test the performance of the proposed L1-cSVD algorithm on synthetic and real datasets against the state-of-the-art RPCA, and the results corroborate the robustness of the proposed algorithm in preserving the SVs in the presence of outliers.
II. TECHNICAL BACKGROUND
A. Standard SVD and PCA
SVD decomposes a $D \times N$ matrix $\mathbf{X}$ as [20]
$$\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top, \qquad (1)$$
where $\mathbf{U} \in \mathbb{R}^{D \times d}$ and $\mathbf{V} \in \mathbb{R}^{N \times d}$ are orthonormal matrices, defined as the left and right singular vectors, respectively, $\boldsymbol{\Sigma} \in \mathbb{R}^{d \times d}$ is a positive-valued diagonal matrix whose diagonal elements are the singular values (SVs), and $d = \operatorname{rank}(\mathbf{X})$.
This is the “compact” SVD (cSVD) where the left and right singular vectors corresponding to zero SVs are disregarded [20].
For simplicity, we will refer to “compact” SVD as SVD throughout this paper.
Standard SVD is very closely related to PCA, since the first $K$ ($K \leq d$) left singular vectors in $\mathbf{U}$ are also the first $K$ PCs of $\mathbf{X}$, maximizing the L2-norm projection [9]
$$\mathbf{Q}_{L2} = \underset{\mathbf{Q} \in \mathbb{S}^{D \times K}}{\operatorname{argmax}} \; \lVert \mathbf{Q}^\top \mathbf{X} \rVert_{2,2}, \qquad (2)$$
where $\mathbb{S}^{D \times K}$ denotes the set of orthonormal matrices in $\mathbb{R}^{D \times K}$ (Stiefel manifold) and $\lVert \cdot \rVert_{2,2}$ denotes the Frobenius or L2-norm of its matrix argument [20].
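As a quick numerical illustration (a standalone NumPy sketch that is not part of the original paper; all variable names are ours), one can verify that the first K left singular vectors of the compact SVD attain the maximum of (2), with the attained metric equal to the square root of the sum of the K largest squared SVs:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 40))          # D = 5, N = 40 data matrix

# Compact SVD (1): keep only the d = rank(X) singular values/vectors.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
d = np.linalg.matrix_rank(X)
U, s, Vt = U[:, :d], s[:d], Vt[:d, :]

# The first K left singular vectors solve (2); the attained metric
# ||Q^T X||_{2,2} equals the root of the sum of the K largest squared SVs.
K = 2
Q_L2 = U[:, :K]
print(np.isclose(np.linalg.norm(Q_L2.T @ X, 'fro'),
                 np.sqrt(np.sum(s[:K] ** 2))))    # True
```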
B. L1-PCA
The proposed method builds on top of L1-PCA, which is mathematically formulated by replacing the L2-norm in the optimization problem of (2) with the L1-norm (sum of absolute values), as
$$\mathbf{Q}_{L1} = \underset{\mathbf{Q} \in \mathbb{S}^{D \times K}}{\operatorname{argmax}} \; \lVert \mathbf{Q}^\top \mathbf{X} \rVert_{1,1}. \qquad (3)$$
L1-PCA can be extended to robust SV estimation by taking the standard SVD of the projected matrix $\mathbf{Q}_{L1}\mathbf{Q}_{L1}^\top \mathbf{X}$. The optimal solution to (3) was first presented in [14] and has cost polynomial in $N$. In this work, we focus on suboptimal approaches with lower time complexity.
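A minimal sketch of this extension follows (our illustration, not the paper's L1-cSVD algorithm; it assumes a matrix Q_L1 with orthonormal columns obtained from any L1-PCA solver, such as those reviewed below):

```python
import numpy as np

def robust_svs_from_l1_subspace(X, Q_L1):
    """SVs of Q_L1 Q_L1^T X, i.e., of X projected onto the L1-PCA subspace."""
    K = Q_L1.shape[1]
    X_proj = Q_L1 @ (Q_L1.T @ X)              # rank-K projection of the data
    return np.linalg.svd(X_proj, compute_uv=False)[:K]
```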
1) Greedy Algorithm with Successive Nullspace Projection: Kwak [15] proposed an iterative algorithm to solve (3) when $K = 1$. The algorithm can be summarized as
$$\mathbf{b}^{(t)} = \operatorname{sgn}\left(\mathbf{X}^\top \mathbf{X}\mathbf{b}^{(t-1)}\right), \qquad (4)$$
$t = 2, 3, 4, \ldots$, where $\mathbf{b}^{(1)} \in \{\pm 1\}^{N}$ is an antipodal binary vector that can be randomly initialized. Upon convergence, the PC is approximated as $\mathbf{q} = \mathbf{X}\mathbf{b} / \lVert \mathbf{X}\mathbf{b} \rVert_2$. For $K > 1$, the PCs in $\mathbf{Q}$ are found in a greedy fashion, by replacing $\mathbf{X}$ with its projection onto the nullspace of the previously found PCs. It is important to note that, because L1-PCA is not scalable (i.e., the PCs themselves depend on the number of PCs being computed), the greedy approach is suboptimal.
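The sketch below illustrates the fixed-point iteration (4) together with the successive nullspace-projection step for K > 1 (our illustration; the initialization, iteration cap, and tie-breaking of sgn(0) are implementation choices not specified above):

```python
import numpy as np

def l1pca_greedy_k1(X, n_iter=100, seed=0):
    """Fixed-point iteration (4): b <- sgn(X^T X b), then q = Xb / ||Xb||_2."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    b = rng.choice([-1.0, 1.0], size=N)        # random antipodal initialization b^(1)
    for _ in range(n_iter):
        b_new = np.sign(X.T @ (X @ b))
        b_new[b_new == 0] = 1.0                # break sign ties away from zero
        if np.array_equal(b_new, b):           # converged: sign pattern is fixed
            break
        b = b_new
    q = X @ b
    return q / np.linalg.norm(q)

def l1pca_greedy(X, K, **kw):
    """Greedy K > 1: deflate X onto the nullspace of previously found PCs."""
    D = X.shape[0]
    Q = np.zeros((D, K))
    X_res = X.copy()
    for k in range(K):
        q = l1pca_greedy_k1(X_res, **kw)
        Q[:, k] = q
        X_res = X_res - np.outer(q, q @ X_res)  # (I - q q^T) X_res
    return Q
```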
2) Iterative Alternating Algorithm: Nie et al. [17] presented a method that finds the column vectors of $\mathbf{Q}$ jointly. The iterative algorithm can be summarized as
$$\mathbf{B}^{(t)} = \operatorname{sgn}\left(\mathbf{X}^\top \mathbf{Q}^{(t-1)}\right), \quad \mathbf{Q}^{(t)} = \mathcal{U}\left(\mathbf{X}\mathbf{B}^{(t)}\right), \qquad (5)$$
$t = 2, 3, 4, \ldots$, where $\mathcal{U}(\cdot)$ returns the closest orthonormal matrix by means of the Procrustes theorem [20] and $\mathbf{B}^{(1)} \in \{\pm 1\}^{N \times K}$ is a binary matrix that can be arbitrarily initialized.
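The following sketch illustrates the alternating updates in (5) (ours; the Procrustes step is implemented via an SVD, and the stopping rule, initialization, and sign tie-breaking are our choices):

```python
import numpy as np

def procrustes(A):
    """U(A): nearest orthonormal matrix to A (Procrustes), via the SVD of A."""
    W, _, Zt = np.linalg.svd(A, full_matrices=False)
    return W @ Zt

def l1pca_alternating(X, K, n_iter=100, seed=0):
    """Alternating updates (5): B <- sgn(X^T Q), Q <- U(XB)."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    B = rng.choice([-1.0, 1.0], size=(N, K))   # arbitrary binary initialization B^(1)
    Q = procrustes(X @ B)
    for _ in range(n_iter):
        B_new = np.sign(X.T @ Q)
        B_new[B_new == 0] = 1.0
        if np.array_equal(B_new, B):           # converged: binary matrix is fixed
            break
        B = B_new
        Q = procrustes(X @ B)
    return Q
```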
3) Bit-flipping Algorithm: Markopoulos et al. [18] proposed an algorithm that calculates the effect of flipping any single bit of the binary matrix $\mathbf{B}$ on the optimization metric of (3) and flips the bit that yields the highest increase of the metric. The algorithm converges to the optimal L1-PCs with high frequency and typically attains a higher value of the optimization metric of (3) than the previous alternatives.
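For illustration only, a naive sketch of the bit-flipping principle follows. It relies on the identity max_Q ||Q^T X||_{1,1} = max_B ||XB||_* (nuclear norm) used in [18], but re-computes the metric from scratch for every candidate flip rather than using the efficient incremental updates of the actual algorithm; the single random initialization is also our simplification:

```python
import numpy as np

def l1pca_bitflip_naive(X, K, seed=0):
    """Greedily flip the single bit of B that most increases ||XB||_*, then Q = U(XB)."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    B = rng.choice([-1.0, 1.0], size=(N, K))
    metric = np.linalg.norm(X @ B, ord='nuc')
    improved = True
    while improved:
        improved = False
        best_metric, best_bit = metric, None
        for n in range(N):
            for k in range(K):
                B[n, k] *= -1                  # tentatively flip bit (n, k)
                m = np.linalg.norm(X @ B, ord='nuc')
                if m > best_metric:
                    best_metric, best_bit = m, (n, k)
                B[n, k] *= -1                  # undo the tentative flip
        if best_bit is not None:
            B[best_bit] *= -1                  # commit the best single-bit flip
            metric = best_metric
            improved = True
    W, _, Zt = np.linalg.svd(X @ B, full_matrices=False)
    return W @ Zt                              # Q = U(XB), the Procrustes solution
```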
C. RPCA
Another line of research on the problem of robustly recovering a low-rank structure from a corrupted matrix is RPCA [19]. This approach seeks to decompose a matrix $\mathbf{X}$ into a low-rank component $\mathbf{L}$ and a sparse component $\mathbf{S}$ that models sparse outliers, by solving the problem
$$\underset{\mathbf{L},\, \mathbf{S},\; \mathbf{L} + \mathbf{S} = \mathbf{X}}{\operatorname{minimize}} \; \lVert \mathbf{L} \rVert_* + \lambda \lVert \mathbf{S} \rVert_{1,1}, \qquad (6)$$
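A compact sketch of how (6) is commonly solved is given below (ours, using the well-known augmented-Lagrangian / singular-value-thresholding iterations for principal component pursuit; the defaults for lambda and mu are common heuristics rather than values prescribed by the text above):

```python
import numpy as np

def shrink(M, tau):
    """Elementwise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular-value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_alm(X, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Augmented-Lagrangian iterations for (6): recover L (low rank) and S (sparse)."""
    D, N = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(D, N))       # common choice from [19]
    mu = mu if mu is not None else 0.25 * D * N / (np.abs(X).sum() + 1e-12)
    S = np.zeros_like(X)
    Y = np.zeros_like(X)                       # dual variable for the constraint L + S = X
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        Y = Y + mu * (X - L - S)
        if np.linalg.norm(X - L - S) <= tol * np.linalg.norm(X):
            break
    return L, S
```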