
A UNITARY TRANSFORM BASED GENERALIZED APPROXIMATE MESSAGE PASSING
Jiang Zhu‡, Xiangming Meng⋆, Xupeng Lei‡, Qinghua Guo†
‡Ocean College, Zhejiang University, Zhoushan, 316021, China
⋆Institute for Physics of Intelligence, The University of Tokyo, 7-3-1, Hongo, Tokyo 113-0033, Japan
†SECTE, University of Wollongong, NSW, 2522, Australia
Corresponding author. E-mail: meng@g.ecc.u-tokyo.ac.jp. This work is supported by NSFC under grant 61901415.
ABSTRACT
We consider the problem of recovering an unknown signal $x \in \mathbb{R}^n$ from general nonlinear measurements obtained through a generalized linear model (GLM), i.e., $y = f(Ax + w)$, where $f(\cdot)$ is a componentwise nonlinear function. Based on unitary transform approximate message passing (UAMP) and expectation propagation, a unitary transform based generalized approximate message passing (GUAMP) algorithm is proposed for general measurement matrices $A$, in particular highly correlated matrices. Experimental results on quantized compressed sensing demonstrate that the proposed GUAMP significantly outperforms state-of-the-art GAMP and GVAMP under correlated matrices $A$.
Index Terms—GLM, AMP, GAMP, message passing,
quantized compressed sensing
1. INTRODUCTION
We consider the general problem of inference on generalized linear models (GLM) [1], i.e., recovering an unknown signal $x \in \mathbb{R}^n$ from a noisy linear transform followed by componentwise nonlinear measurements
$$y = f(Ax + w), \qquad (1)$$
where $A \in \mathbb{R}^{m \times n}$ is a known linear mixing matrix, $w \sim \mathcal{N}(w; 0, \sigma^2 I_m)$ is i.i.d. Gaussian noise with known variance, and $f(\cdot): \mathbb{R}^{m \times 1} \to \mathcal{Q}^{m \times 1}$ is a componentwise nonlinear link function. Denoting the hidden linear transform output by $z \triangleq Ax \in \mathbb{R}^m$, the componentwise nonlinear function $f(\cdot)$ can be equivalently described in a probabilistic way using a fully factorized likelihood function as follows:
$$p(y \mid z) = \prod_{i=1}^{m} p(y_i \mid z_i), \qquad (2)$$
where $p(y_i \mid z_i)$ is determined by the specific nonlinear function $f(\cdot)$; e.g., for 1-bit measurements $y_i = \mathrm{sign}(z_i + w_i)$, one has $p(y_i \mid z_i) = \Phi(y_i z_i / \sigma)$, with $\Phi(\cdot)$ the standard normal CDF. The prior distribution $p(x)$ of $x$ is also assumed to be fully factorized, $p(x) = \prod_{j=1}^{n} p(x_j)$, for simplicity. GLM inference has wide applications in science and engineering such as wireless communications, signal processing, and machine learning. In the special case of an identity link function $f(\cdot)$, the nonlinear GLM in (1) reduces to the popular standard linear model (SLM)
$$y = Ax + w. \qquad (3)$$
Thus, GLM is an extension of SLM from linear measurements to nonlinear measurements, which are prevalent in real-world applications such as quantized compressed sensing, pattern classification, and phase retrieval.
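For concreteness, the following is a minimal sketch, assuming NumPy, of generating measurements from (1) and (3); the dimensions, sparsity level, and the 1-bit quantizer chosen as the link function $f$ are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 128, 256

# Sparse ground-truth signal x (Bernoulli-Gaussian, an illustrative prior p(x))
x = rng.standard_normal(n) * (rng.random(n) < 0.1)

# i.i.d. Gaussian mixing matrix A and known noise level sigma
A = rng.standard_normal((m, n)) / np.sqrt(m)
sigma = 0.1

z = A @ x                                            # hidden linear output z = Ax
y_slm = z + sigma * rng.standard_normal(m)           # SLM measurements, Eq. (3)
y_glm = np.sign(z + sigma * rng.standard_normal(m))  # 1-bit GLM, Eq. (1) with f = sign
```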
A variety of algorithms have been proposed for inference over GLMs and SLMs. Among them, the past decade has witnessed the advent of one distinguished family of probabilistic algorithms called message passing algorithms. The most famous ones are the approximate message passing (AMP) algorithm [2, 3] for SLM and generalized approximate message passing (GAMP) [4] for GLM, which have been proved to be optimal under i.i.d. Gaussian matrices $A$. However, both AMP and GAMP diverge for general $A$. In [5–7], GAMP is incorporated into sparse Bayesian learning (SBL) to improve the robustness of GAMP and reduce the computational complexity of SBL. Vector AMP (VAMP) [8] (or OAMP [9], an algorithm similar to VAMP) and generalized VAMP (GVAMP) [10] have been proposed to improve the performance with general $A$ for SLM and GLM, respectively. VAMP is derived from expectation propagation (EP) [11], another powerful message passing algorithm. Remarkably, it has been demonstrated in [12, 13] that AMP, GAMP, and GVAMP can all be derived concisely as special instances of EP under different assumptions and thus be unified within a single EP framework. Despite significant improvement in robustness, in particular for right-orthogonally invariant matrices, VAMP and GVAMP still suffer from poor convergence in some more challenging scenarios, e.g., when the measurement matrix is highly correlated [14, 15].
In this work, we focus on the AMP variant unitary approximate message passing (UAMP) [16], which was formerly called UTAMP [16–18]. The motivation behind UAMP is as follows: since AMP has been proven to work well when the elements of $A$ in (3) are uncorrelated, one can artificially construct such a "good" and equivalent measurement model simply by performing a singular value decomposition (SVD) on $A = U \Sigma V^T$, where $U \in \mathbb{R}^{m \times r}$, $\Sigma \in \mathbb{R}^{r \times r}$, and $V \in \mathbb{R}^{n \times r}$ are the economy-size SVD factors with $r$ the rank of $A$. Left-multiplying (3) by $U^T$ then yields the equivalent model $\tilde{y} = U^T y = \Sigma V^T x + U^T w$, in which the noise $U^T w$ remains i.i.d. Gaussian since $U$ has orthonormal columns.
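The following is a minimal sketch of this unitary transformation, assuming NumPy; the function and variable names are illustrative, not taken from [16].

```python
import numpy as np

def unitary_transform(A, y):
    """Turn y = A x + w into the equivalent model
    y_tilde = Phi x + w_tilde, with Phi = Sigma V^T and w_tilde = U^T w
    still i.i.d. Gaussian, via an economy-size SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U @ np.diag(s) @ Vt
    Phi = s[:, None] * Vt    # Sigma V^T, the transformed "measurement matrix"
    y_tilde = U.T @ y        # transformed measurements U^T y
    return Phi, y_tilde
```

Running AMP on $(\tilde{y}, \Phi)$ instead of $(y, A)$ is the essence of UAMP for the SLM; the proposed GUAMP builds on this transformed model, combined with EP, to handle the nonlinear GLM (1).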