
Recently, deep generative priors [14–18], which assume that the signal lies within the range of a generative network, have been proposed to replace hand-crafted priors. Among them, trained generative networks [14–16] build priors by learning the signal distribution from massive amounts of training data and achieve superior results. However, sufficient training data cannot be obtained in many scenarios, and reconstruction performance may even degrade when the training and test data distributions are inconsistent, which motivates untrained generative priors. Untrained generative networks such as Deep Image Prior (DIP) [19] and its variant Deep Decoder [20] do not rely on any external data to learn the signal distribution, because their structure implicitly captures low-level statistical priors of images [19].
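To make this concrete, the following is a minimal sketch of how such an untrained prior is typically used, assuming PyTorch and a hypothetical network factory `make_net`; a randomly initialized network is fit to a single observation, and the architecture alone acts as the regularizer.

```python
import torch

# Minimal sketch of a DIP-style fit (PyTorch assumed). The weights and the
# fixed input z start random; only the single observation y drives the fit.
# `make_net` is a hypothetical factory for a convolutional encoder-decoder.
def dip_fit(y, make_net, n_iters=2000, lr=1e-2):
    z = torch.randn(1, 32, y.shape[-2], y.shape[-1])  # fixed random code
    net = make_net()                                  # randomly initialized, untrained
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(net(z), y)  # data-fit term only;
        loss.backward()                                 # the architecture is the prior
        opt.step()
    return net(z).detach()
```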
Although a variety of excellent PR models have been proposed, designing efficient algorithms to solve them remains challenging. Algorithms for the PR problem can be divided into classical algorithms and learning-based algorithms. The earliest classical methods are based on alternating projections, such as Gerchberg-Saxton (GS) [21] and its variant, hybrid input-output (HIO) [22]. Gradient-descent-based algorithms were proposed later, such as Wirtinger Flow (WF) [23], Truncated Wirtinger Flow (TWF) [24], and related variants [25]. In addition, there are algorithms based on second-order derivatives [26], which converge faster than WF. Most classical algorithms build on these methods or their variants.
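For orientation, the basic WF update can be sketched in NumPy as below; the published algorithm [23] additionally relies on a spectral initialization and a carefully scheduled step size, both omitted here.

```python
import numpy as np

# Schematic Wirtinger Flow step for intensity measurements y = |A x|^2:
# gradient of f(z) = (1/2m) * sum_i (|a_i^* z|^2 - y_i)^2 with respect to z.
def wf_step(z, A, y, mu=0.1):
    Az = A @ z
    grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / len(y)
    return z - mu * grad  # mu is a fixed step size here, for illustration only
```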
With the development of neural networks, learning-based algorithms have gained considerable momentum. Some algorithms use deep neural networks as denoisers within the iterative process of PR [27;28], while others use deep neural networks to learn a mapping that reconstructs the images from measurements [29;30]. Deep generative networks have recently been introduced into the PR problem as image priors. Some algorithms [14–16] use trained or pre-trained generative networks to solve the Fourier or Gaussian PR problem (the latter replaces the Fourier matrix F with a Gaussian random matrix). However, their reconstruction performance inevitably depends on massive amounts of training data.
Algorithms based on untrained generative priors, named Net-GD and Net-PGD [17;31], have been proposed recently and applied to Gaussian PR; they adopt gradient descent and projected gradient descent, respectively. Although Gaussian PR has been studied extensively, FPR is more challenging [32] and applicable in a wider range of scenarios. When Net-GD and Net-PGD are applied to the FPR problem, their reconstruction performance and robustness are unsatisfactory, even at high sampling rates. In summary, the application of untrained generative networks to the FPR problem remains to be further explored, including theoretical guarantees and algorithms that remain effective at low sampling rates.
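As a point of reference for this comparison, the loop structure of a Net-PGD-style method can be sketched as follows; the helpers `grad_step` and `project_to_range` are hypothetical stand-ins for the measurement-loss gradient step and the network refit described in [17;31].

```python
# Structural sketch of a projected-gradient-descent iteration with an
# untrained generative prior: descend the measurement loss, then project
# onto the range of the network G(z; w) by refitting its weights w.
def net_pgd(x0, grad_step, project_to_range, n_iters=100):
    x = x0
    for _ in range(n_iters):
        v = grad_step(x)         # gradient step on the measurement loss
        x = project_to_range(v)  # approx. min_w ||G(z; w) - v|| via a few SGD steps
    return x
```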
This paper proposes an algorithm named Net-ADM that combines the alternating direction method of multipliers (ADMM) with an untrained generative network to tackle the FPR problem. Specifically, we smooth the amplitude-based loss function (1) and increase the dimension of the variable, so that the objective function is Lipschitz differentiable and the Fourier transform can be computed by the fast Fourier transform (FFT) [33]. The FPR problem is then modeled as a constrained optimization problem, which ADMM solves in an alternating fashion. Additionally, the untrained generative network is embedded into the iterative process to project an estimate produced by ADMM into the generative space, which captures some image characteristics; the projected signal is then used in the next ADMM iteration. We theoretically analyze the two projections included in the Net-ADM algorithm: one decreases the objective function, and the other moves the estimate closer to the optimal solution; these properties may explain the gain over plain ADMM. Numerical experiments show that the reconstruction performance and robustness of Net-ADM are superior to those of prior works, especially at low sampling rates.
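To convey the overall structure before the formal derivation, a rough sketch of such a loop is given below; all helper functions are hypothetical placeholders, and the precise update rules of Net-ADM are given in Section 2.

```python
import numpy as np

# Structural sketch only: scaled-form ADMM with the untrained network inserted
# as a projection step. This is not the exact Net-ADM algorithm of Section 2.
def net_adm(x0, n_iters, admm_x_update, admm_z_update, project_to_range):
    x, z = x0.copy(), x0.copy()
    u = np.zeros_like(x0)        # scaled dual variable
    for _ in range(n_iters):
        x = admm_x_update(z, u)  # data-fit subproblem on the smoothed loss (FFT-based)
        z = admm_z_update(x, u)  # auxiliary-variable subproblem
        z = project_to_range(z)  # project the ADMM estimate into the generative space
        u = u + x - z            # dual ascent
    return z
```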
The remainder of the paper is organized as follows. In Section 2, we establish the FPR model based on an untrained generative prior, describe the Net-ADM algorithm, and give the relevant theoretical analysis. Section 3 describes the experimental settings, presents numerical experiments under different sampling rates and noise levels, and finally discusses the effect of the algorithm's parameters. In Section 4, conclusions are drawn and directions for future work are proposed.
In this paper, bold lowercase letters or numbers represent vectors, calligraphic uppercase letters denote operators, and blackboard-bold uppercase letters represent spaces or sets. $\sqrt{\cdot}$, $|\cdot|$ and $\div$ are element-wise operators. $\|\cdot\|$ is the Euclidean norm of a vector or the spectral norm of a matrix. The Hadamard product is denoted by $\odot$. Vectorization of a matrix is written as $\mathrm{vec}(\cdot)$. $(\cdot)_{[n+1:m]}$ denotes the sub-vector formed by the last $m-n$ elements.