Tucker-O-Minus Decomposition for Multi-view
Tensor Subspace Clustering
Yingcong Lu, Yipeng Liu, Zhen Long, Zhangxin Chen, Ce Zhu
October 25, 2022
Abstract
With the powerful ability to exploit the latent structure of self-representation information, different tensor decompositions have been employed in low-rank multi-view clustering (LRMVC) models to achieve significant performance. However, current approaches suffer from a series of problems related to these tensor decompositions, such as the unbalanced matricization scheme, rotation sensitivity, and deficient correlation capture. All of these leave LRMVC with insufficient access to global information, which is contrary to the target of multi-view clustering. To alleviate these problems, we propose a new tensor decomposition called the Tucker-O-Minus decomposition (TOMD) for multi-view clustering. Specifically, based on the Tucker format, we additionally employ the O-minus structure, which consists of a circle with an efficient bridge linking two weakly correlated factors. In this way, the core tensor in the Tucker format is replaced by the O-minus architecture with a more balanced structure, and an enhanced capacity for capturing global low-rank information is achieved. Simultaneously, the proposed TOMD provides a more compact and powerful representation of the self-representation tensor. The alternating direction method of multipliers is used to solve the proposed model, TOMD-MVC. Numerical experiments on six benchmark data sets demonstrate the superiority of our proposed method in terms of F-score, precision, recall, normalized mutual information, adjusted rand index, and accuracy.
1 Introduction
A tensor, a higher-order generalization of vectors and matrices, provides a natural representation for high-order data. For example, the ORL multi-view data set [54] is a 3rd-order tensor $\mathcal{X} \in \mathbb{R}^{I \times C_v \times V}$ ($v = 1, \cdots, V$), where $I$ is the number of samples, $C_v$ is the feature size of the $v$-th view, and $V$ is the number of views. Tensor decomposition allows us to explore all dimensions simultaneously to obtain more latent information, which has attracted attention in a series of fields, e.g., image processing [8,32], machine learning [15,16], and signal processing [1,5]. Tensor-based multi-view clustering (MVC) is one such field; it separates multi-view data into clusters by exploiting their latent information.

All the authors are with the School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, 611731, China. E-mail: yipengliu@uestc.edu.cn
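As a concrete picture of this data layout, the following minimal numpy sketch builds a synthetic multi-view collection; the shapes are illustrative assumptions, not the real ORL dimensions. Because $C_v$ varies across views, the raw data are naturally kept as a list of view matrices.

```python
import numpy as np

# Minimal sketch of a multi-view data layout; the sizes below are
# illustrative assumptions, not the real ORL dimensions.
I, V = 400, 3                  # number of samples, number of views
C = [512, 256, 640]            # hypothetical per-view feature sizes C_v

# Since C_v differs across views, the raw data are kept as a list of
# view matrices, i.e., the slices of the 3rd-order tensor X^{I x C_v x V}.
views = [np.random.randn(I, C[v]) for v in range(V)]
for v, Xv in enumerate(views):
    print(f"view {v + 1}: {Xv.shape}")   # (I, C_v)
```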
Most tensor MVC methods are based on the assumption that their self-representation tensors are low-rank [53]. For example, Chen et al. [7] combine the low-rank tensor graph and subspace clustering into a unified multi-view clustering model, which applies the Tucker decomposition [41,42] to explore the low-rank information of the representation tensor. With the emergence of a new type of decomposition for 3rd-order tensors [20,21], multiple multi-view subspace clustering methods based on the tensor singular value decomposition (t-SVD) have been proposed [45,46,58]. In addition, tensor networks provide a more compact and flexible representation for higher-order tensors than traditional tensor decompositions [27,30,31,33]. Along this line, Yu et al. [51] proposed a novel non-negative tensor ring (NTR) decomposition and a graph-regularized NTR (GNTR) for non-negative multi-way representation learning, with satisfactory performance in clustering tasks.
Although the aforementioned methods have achieved promising clustering performance, several problems remain. Since the Tucker rank is related to the unbalanced mode-$n$ unfolding matrices [2], capturing the global information of the self-representation tensor by simply applying the Tucker decomposition may be difficult in LRMVC. Besides, the t-SVD suffers from rotation sensitivity, and the low-rank information cannot be fully discovered along the 3rd mode [10]. Thus, t-SVD based methods always need to rotate the self-representation tensor to explore the correlations across different views, and as a result the correlations among samples are captured inadequately. Furthermore, the tensor ring (TR) has shown better performance in exploring low-rank information thanks to its well-balanced matricization scheme. However, the interaction between neighboring modes in TR is stronger than that between two modes separated by a large distance, so the latter correlations are ignored.
Considering the above problems, we propose the Tucker-O-Minus decomposition for multi-view subspace clustering. To employ additional factors and correlations for constructing a more compact and balanced tensor network, a simple approach seems to be replacing the core factor of the Tucker structure with the TR format. However, as mentioned above, the TR architecture is deficient in exploring the correlations between weakly-connected modes, which results in the loss of essential clustering information in LRMVC. To this end, we propose the O-minus structure for LRMVC based on the following three considerations. Firstly, from the perspective of two-point correlations in high-energy physics [3], appending moderate lines with effective vertexes between two slightly correlated factors will strengthen their relationship. Similarly, we add a "bridge" with a tensor factor on top of the TR structure to better capture the low-rank information. Nonetheless, more links generate more loops, which cause huge difficulties in tensor contraction and increase the computational burden of the tensor network [9]. Meanwhile, since the number of samples $I \gg V$ in LRMVC, the information related to the correlations of instances, compared with the higher-order information across different views, requires more connections to be fully discovered. Accordingly, to efficiently explore the low-rank information in LRMVC, the correlations related to samples (i.e., $I_1$ and $I_3$ in Fig. 1) are further strengthened by the special "bridge". This unique internal architecture resembles the symbol "⊖", hence the name O-minus. The whole architecture, called the Tucker-O-Minus decomposition (TOMD), is illustrated in Fig. 1. In this way, the low-rank based multi-view clustering problem can be solved with satisfactory results. The main contributions of this paper are summarized as follows:
1. We propose the Tucker-O-Minus decomposition. Different from existing tensor networks, it allows more valid interaction among nonadjacent factors and obtains a better low-rank representation for high-dimensional data. Numerical experimental results on gray image reconstruction show that TOMD is superior to other decompositions in exploring low-rank information.

2. We apply TOMD to a unified multi-view clustering framework. The proposed model, namely TOMD-MVC, utilizes TOMD to capture the low-rank properties of the self-representation tensor from multi-view data. The alternating direction method of multipliers (ADMM) is applied to solve the optimization model.

3. We conduct extensive experiments on six real-world multi-view data sets to demonstrate the performance of our method. Compared with state-of-the-art models, TOMD-MVC achieves highly competitive or even better performance in terms of six evaluation metrics.
The remainder of this paper is organized as follows. Section 2 gives the notations and mathematical background and reviews related works. The proposed tensor network is presented in detail in Section 3. Section 4 gives the multi-view clustering method based on the newly proposed tensor network. The experimental results are demonstrated and analyzed in Section 5. Section 6 finally draws the conclusion.
2 Notations, Preliminaries, and Related Works
2.1 Notations
We give a brief introduction to the notations and preliminaries in this section. Throughout this paper, we use lower case letters, bold lower case letters, bold upper case letters, and calligraphic letters to denote scalars, vectors, matrices, and tensors, respectively, e.g., $a$, $\mathbf{a}$, $\mathbf{A}$, and $\mathcal{A}$. Some other frequently used notations are listed in Table 1, where $\mathcal{A}$ is a 3rd-order tensor and $\mathbf{A}$ is a matrix.
Figure 1: The graphical illustration of TOMD for a 4th-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3 \times I_4}$, where $\mathbf{U}^{(i)} \in \mathbb{R}^{I_i \times R_i}$ ($i = 1, \cdots, 4$) are factor matrices and $\mathcal{G}^{(n)}$, $n = 1, \cdots, 5$, are the sub-tensors.
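To make the structure in Fig. 1 concrete, below is a minimal numpy sketch of one plausible contraction of a TOMD format: $\mathcal{G}^{(1)}$ through $\mathcal{G}^{(4)}$ form a ring carrying the four rank legs, and $\mathcal{G}^{(5)}$ bridges $\mathcal{G}^{(1)}$ and $\mathcal{G}^{(3)}$ (the factors attached to the sample modes $I_1$ and $I_3$). The bond sizes and exact leg placement are assumptions read off the figure, not the paper's definitive construction.

```python
import numpy as np

# Sketch of a TOMD-style contraction for a 4th-order tensor, assuming
# one plausible wiring of Fig. 1: G1-G4 form a ring and G5 bridges
# G1 and G3 (the factors on the sample modes I_1 and I_3).
I = [8, 9, 8, 9]      # outer dimensions I_1..I_4 (illustrative)
R = [4, 4, 4, 4]      # rank-leg sizes R_1..R_4 (assumed)
r, b = 3, 2           # ring bond size and bridge bond size (assumed)

U = [np.random.randn(I[i], R[i]) for i in range(4)]   # factor matrices U^(i)
G1 = np.random.randn(r, R[0], r, b)   # ring-in, R_1 leg, ring-out, bridge leg
G2 = np.random.randn(r, R[1], r)
G3 = np.random.randn(r, R[2], r, b)   # carries the other bridge leg
G4 = np.random.randn(r, R[3], r)
G5 = np.random.randn(b, b)            # the "bridge" linking G1 and G3

# Contract the O-minus core over ring bonds (t, p, q, s) and bridge
# bonds (u, v), leaving the four rank legs a, b, c, d open.
core = np.einsum('tapu,pbq,qcsv,sdt,uv->abcd', G1, G2, G3, G4, G5)

# Tucker-style mode products: X = core x_1 U1 x_2 U2 x_3 U3 x_4 U4.
X = np.einsum('abcd,ia,jb,kc,ld->ijkl', core, *U)
print(X.shape)   # (8, 9, 8, 9)
```

Dropping $\mathcal{G}^{(5)}$ and the two bridge legs recovers the plain Tucker-with-TR-core structure that, as argued above, under-represents the correlations between weakly-connected modes.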
Table 1: Summary of notations in this paper.

    Notation                     Definition
    $\mathbf{A}_i$               $\mathbf{A}_i = \mathcal{A}(:, :, i)$, the $i$-th frontal slice
    $a_{i_1, i_2, i_3}$          the $(i_1, i_2, i_3)$-th entry of tensor $\mathcal{A}$
    $\mathrm{tr}(\mathbf{A})$    $\mathrm{tr}(\mathbf{A}) = \sum_i a_{i,i}$
    $\|\mathbf{A}\|_F$           $\|\mathbf{A}\|_F = \sqrt{\sum_{i_1 i_2} a_{i_1, i_2}^2}$
    $\|\mathbf{A}\|_\infty$      $\|\mathbf{A}\|_\infty = \max_{i_1 i_2} |a_{i_1, i_2}|$
    $\|\mathbf{A}\|_{2,1}$       $\|\mathbf{A}\|_{2,1} = \sum_j \|\mathbf{A}(:, j)\|_2$
    $\|\mathcal{A}\|_F$          $\|\mathcal{A}\|_F = \sqrt{\sum_{i_1 i_2 i_3} |a_{i_1, i_2, i_3}|^2}$
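As a quick numerical companion to Table 1, the following sketch evaluates each quantity with numpy (array sizes are arbitrary):

```python
import numpy as np

# Numerical companion to Table 1; array sizes are arbitrary.
A = np.random.randn(5, 5)        # a matrix
T = np.random.randn(5, 5, 3)     # a 3rd-order tensor

frontal = T[:, :, 0]                         # A_i = A(:, :, i)
entry = T[1, 2, 0]                           # a_{i1, i2, i3}
trace = np.trace(A)                          # tr(A) = sum_i a_{i, i}
fro_mat = np.sqrt((A ** 2).sum())            # ||A||_F
inf_norm = np.abs(A).max()                   # ||A||_inf
l21 = np.linalg.norm(A, axis=0).sum()        # ||A||_{2,1}: l2 of each column, summed
fro_ten = np.sqrt((np.abs(T) ** 2).sum())    # tensor Frobenius norm
```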
2.2 Preliminaries
Definition 1. (Mode-$n$ unfolding) [22] For a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, its matricization along the $n$-th mode is denoted as $\mathbf{A}_{(n)} \in \mathbb{R}^{I_n \times I_1 I_2 \cdots I_{n-1} I_{n+1} \cdots I_N}$.

Definition 2. ($n$-unfolding) [9] Given an $N$-way tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, its $n$-unfolding is defined as $\mathbf{A}_{\langle n \rangle} \in \mathbb{R}^{I_1 \cdots I_n \times I_{n+1} \cdots I_N}$.

Definition 3. (Mode-$n$ product) [22] The mode-$n$ product of $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and a matrix $\mathbf{B} \in \mathbb{R}^{J \times I_n}$ is defined as

$$\mathcal{X} = \mathcal{A} \times_n \mathbf{B} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}. \quad (1)$$
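The following numpy sketch implements Definitions 1-3. Note that column-ordering conventions for unfoldings vary across the literature; this sketch is only internally consistent, not a claim about the ordering used in [22]:

```python
import numpy as np

def mode_n_unfolding(A, n):
    """A_(n) of Definition 1 (n is 1-indexed): mode n becomes the rows."""
    return np.moveaxis(A, n - 1, 0).reshape(A.shape[n - 1], -1)

def n_unfolding(A, n):
    """A_<n> of Definition 2: the first n modes become the rows."""
    return A.reshape(int(np.prod(A.shape[:n])), -1)

def mode_n_product(A, B, n):
    """X = A x_n B of Definition 3, for B of size J x I_n."""
    shape = list(A.shape)
    shape[n - 1] = B.shape[0]
    M = B @ mode_n_unfolding(A, n)            # multiply along mode n
    folded = M.reshape([shape[n - 1]] + shape[:n - 1] + shape[n:])
    return np.moveaxis(folded, 0, n - 1)      # put mode n back in place

A = np.random.randn(3, 4, 5)
B = np.random.randn(7, 4)
print(mode_n_unfolding(A, 2).shape)   # (4, 15)
print(n_unfolding(A, 2).shape)        # (12, 5)
print(mode_n_product(A, B, 2).shape)  # (3, 7, 5)
```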
Definition 4. (Tensor Network Contraction) [38] Given a tensor network composed of $N$ sub-tensors $\{\mathcal{G}^{(n)}\}$ ($n = 1, \cdots, N$), $\mathcal{G}$ denotes the tensor network contraction result of $\{\mathcal{G}^{(n)}\}$ ($n = 1, \cdots, N$), whose general mathematical form can be written as

$$\mathcal{G}(\{r_1^n, r_2^n, \cdots\}) = \sum_{d_1, d_2, \cdots}^{D_1, D_2, \cdots} \prod_{n=1}^{N} \mathcal{G}^{(n)}(\{r_1^n, r_2^n, \cdots, d_1, d_2, \cdots\}) = \mathrm{TC}(\{\mathcal{G}^{(n)}\}_{n=1}^{N}), \quad (2)$$

where $\{D_1, D_2, \cdots\}$ are geometrical indexes, each of which is normally shared by two tensors and will be contracted, and $R = \{R_1^1, R_2^1, \cdots, R_1^N, R_2^N, \cdots\}$ are open bonds, each of which belongs to only one tensor. After contracting all the geometrical indexes, $\mathcal{G}$ is a $P$-th order tensor, where $P$ is the total number of open indexes in $R$.
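As a small worked instance of Eq. (2), the sketch below contracts a ring of three sub-tensors; the sizes are arbitrary. The shared indexes play the role of $d_1, d_2, d_3$ and are summed away, while the three open bonds survive, so $P = 3$:

```python
import numpy as np

# Three sub-tensors contracted in a ring. The shared indexes x, y, z
# act as the geometrical indexes d_1, d_2, d_3 of Eq. (2) and are
# summed away; the open bonds a, b, c survive, so P = 3.
D1, D2, D3 = 4, 5, 6        # geometrical index sizes
Ra, Rb, Rc = 2, 3, 2        # open bond sizes

G1 = np.random.randn(D3, Ra, D1)
G2 = np.random.randn(D1, Rb, D2)
G3 = np.random.randn(D2, Rc, D3)

G = np.einsum('zax,xby,ycz->abc', G1, G2, G3)   # G = TC({G^(n)})
print(G.shape)   # (2, 3, 2)
```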
2.3 Related Works
This section reviews some closely related works, including low-rank approximation based multi-view subspace clustering and multi-view graph clustering.
2.3.1 Low-rank Approximation Based Multi-view Subspace Clustering
Subspace learning based multi-view clustering methods assume that each sample
can be represented as a linear combination of all the samples [40].
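To make this assumption concrete, here is a generic sketch of per-view self-representation via ridge-regularized least squares. It only illustrates the subspace idea and is not the TOMD-MVC model; the regularizer `lam` and the stacking step are illustrative choices.

```python
import numpy as np

def self_representation(X, lam=0.1):
    """Ridge-regularized self-representation: argmin_Z ||X - X Z||_F^2
    + lam ||Z||_F^2, with X of size features x samples. The closed form
    is (X^T X + lam I)^{-1} X^T X."""
    I = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(I), X.T @ X)

# Two hypothetical views of 50 samples; stacking the per-view Z_v gives
# the 3rd-order self-representation tensor that LRMVC methods regularize.
views = [np.random.randn(30, 50), np.random.randn(20, 50)]
Z = np.stack([self_representation(Xv) for Xv in views], axis=2)
print(Z.shape)   # (50, 50, 2)
```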
Chen et al. [7] utilize the Tucker decomposition to obtain the "clean" representation tensor from view-specific matrices. He et al. [12] introduce a tensor-based multi-linear multi-view clustering (MMC) method. Built upon the canonical polyadic decomposition (CPD) [19], MMC is able to explore the higher-order interaction information among the multiple views. Recently, the tensor singular value decomposition (t-SVD) based tensor nuclear norm [20,21] has shown satisfactory performance in capturing low-rank information of 3rd-order tensors. Thus, many t-SVD based works [45,46,48] have been proposed to better capture the high-order correlations hidden in the 3rd-order tensor from multi-view data.