
The SLE problem has roots dating back to Prony’s method [4]. Methods based on statistical
signal models leveraging eigendecompositions of the signal covariance matrix were later pioneered
in [5] and subsequently rediscovered in [6]. Initially considered to be less proficient at SLE than
Prony’s method [7], this methodology would form the inspiration for the MUSIC algorithm [8].
Within a similar timeframe, methods leveraging the rotational invariance of the signal subspace
to directly estimate the component frequencies were developed to form the ESPRIT algorithm [9].
Accompanying these statistical methods, deterministic Prony-like methods such as the matrix pencil
were later developed [10,11]. The generalized eigenvalue problem solved in the matrix pencil method
(which subsumes Prony's method as a special case) is more stable in the presence of noise than the
root-finding procedure associated with Prony's method [12]. The matrix pencil, MUSIC, and ESPRIT
algorithms form what we refer to as the “classical” SLE methods.
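To make the classical approach concrete, below is a minimal NumPy sketch of a matrix pencil frequency estimator. It is an illustration only: the pencil length and the unit-circle pole-selection heuristic are simplifying choices made here, not prescriptions from [10-12].

```python
import numpy as np

def matrix_pencil(y, K, L=None):
    """Estimate K sinusoidal frequencies (cycles/sample) from uniform
    samples y via a matrix pencil; a simplified, noiseless-leaning sketch."""
    N = len(y)
    L = L if L is not None else N // 2          # pencil parameter
    # Hankel data matrix; Y0/Y1 are one-sample-shifted submatrices.
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # The signal poles z_k = exp(j*2*pi*f_k) are generalized eigenvalues
    # of the pencil (Y1, Y0); pinv(Y0) @ Y1 realizes this directly and is
    # more stable in noise than Prony-style polynomial rooting.
    lam = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    # Keep the K eigenvalues with modulus closest to 1 (the signal poles).
    lam = lam[np.argsort(np.abs(np.abs(lam) - 1.0))[:K]]
    return np.sort(np.angle(lam) / (2 * np.pi) % 1)

# Example: two complex exponentials at f = 0.12 and 0.31 cycles/sample.
n = np.arange(64)
y = np.exp(2j * np.pi * 0.12 * n) + 0.5 * np.exp(2j * np.pi * 0.31 * n)
print(matrix_pencil(y, K=2))                    # ~ [0.12, 0.31]
```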
More recently, advances in sparse approximation have led to a more optimization-based
perspective on SLE. Applying ℓ1-ℓ2 optimization to source localization, specifically in
the context of arrays, was first presented in [13]. The work in [14,15] showed that when the
component signals have on-grid frequencies and positive weights, the signals can be
perfectly recovered via ℓ1-minimization. Sparsity constraints were also shown to yield meaningful
performance improvements in more practical, radar-based applications in [16].
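As a toy illustration of this on-grid, positive-weight setting, the sketch below poses ℓ1-minimization with a positivity constraint as a linear program and recovers the support exactly in a noiseless instance. The grid size, frequencies, and solver are arbitrary choices for the example, not details drawn from [14,15].

```python
import numpy as np
from scipy.optimize import linprog

# Overcomplete dictionary of M on-grid complex exponentials, N samples each.
N, M = 32, 128
A = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(M) / M))

# Ground truth: three positive weights on grid frequencies.
x_true = np.zeros(M)
x_true[[10, 47, 90]] = [1.0, 0.6, 1.4]
y = A @ x_true                                   # noiseless observations

# l1-minimization with positivity reduces to the LP
#   minimize 1'x  subject to  A x = y,  x >= 0;
# stacking real and imaginary parts gives a real-valued program.
res = linprog(c=np.ones(M),
              A_eq=np.vstack([A.real, A.imag]),
              b_eq=np.concatenate([y.real, y.imag]),
              bounds=(0, None), method="highs")
print(np.flatnonzero(res.x > 1e-6))              # -> [10 47 90]
```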
Works such as [17–19] adapt compressed sensing recovery algorithms such as OMP, CoSaMP,
subspace pursuit, and ℓ1-minimization to SLE when the observations in (1) are viewed through a
dimensionality-reducing sensing matrix. The approach in each is similar: the component signals
are assumed to admit a sparse representation in an overcomplete discrete Fourier dictionary. To
overcome the dictionary's coherence, its elements are split into coherent frames, and the sources
can be coarsely localized to a subset of these frames. This approximation is then refined via a
local optimization step that each author handles differently; a toy sketch of this coarse-then-refine
pattern appears after this paragraph. In a similar manner to compressed
sensing, the authors of [20] proposed a 2-D SLE technique posed in a structured matrix completion
framework. The process leverages the “matrix enhancement” method developed in [11] to yield
recovery guarantees in traditionally adverse scenarios. However, the paper is largely centered on
the matrix completion aspect of the problem and defers the SLE portion to the methods proposed
in [11,12].
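The sketch below is a hypothetical instance of the two-stage pattern mentioned above, not a reproduction of any one method from [17-19]: a greedy, OMP-style pass coarsely localizes each source on a discrete Fourier grid, and a bounded 1-D optimization then refines the frequency off-grid before the component is deflated from the residual.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def atom(f, n):
    """Unit-norm complex exponential at frequency f (cycles/sample)."""
    return np.exp(2j * np.pi * f * n) / np.sqrt(len(n))

def coarse_then_refine(y, n, K, M=256):
    r, freqs = y.copy(), []
    for _ in range(K):
        # Coarse stage: correlate the residual against an M-point grid.
        grid = np.arange(M) / M
        k = np.argmax([np.abs(atom(f, n) @ r.conj()) for f in grid])
        # Refine stage: continuous 1-D search around the chosen grid point.
        f = minimize_scalar(lambda f: -np.abs(atom(f, n) @ r.conj()),
                            bounds=(grid[k] - 1 / M, grid[k] + 1 / M),
                            method="bounded").x
        freqs.append(f % 1)
        # Deflate: subtract the least-squares fit of all atoms found so far.
        B = np.column_stack([atom(g, n) for g in freqs])
        r = y - B @ np.linalg.lstsq(B, y, rcond=None)[0]
    return sorted(freqs)

# Example with off-grid frequencies.
n = np.arange(64)
y = np.exp(2j * np.pi * 0.1234 * n) + 0.7 * np.exp(2j * np.pi * 0.3177 * n)
print(coarse_then_refine(y, n, K=2))             # ~ [0.1234, 0.3177]
```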
A common theme amongst the above sparse approximation and compressed sensing SLE methods
is the assumption that the signal is sparse in a finite dictionary. This precludes the scenario
where the underlying frequencies of the component signals lie off-grid. As previously discussed,
in the traditional SLE problem frequencies are permitted to lie on a continuum. The use of
overcomplete dictionaries is meant to mitigate this mismatch, but it ultimately sidesteps the true
nature of the issue at hand.
TV/atomic norm minimization based methods² resolve this issue, and are effectively a
generalization of ℓ1-minimization to a continuous setting [22]. Utilizing TV norm minimization, the
authors of [23] were able to extend the work of [14,15] to operate on a continuum of frequencies
(assuming a positive weighting of the sinusoids). Subsequently, in [24] it was shown that, under a
mild separation constraint, exact recovery of the component signal parameters could be achieved
in the absence of noise via TV norm minimization. In [25], the sequel to [24], strong theoretical
guarantees on the accuracy of reconstruction from noisy measurements were established for the
same atomic norm denoising framework presented in [26]. The work of [27] decoupled the support
and amplitude estimation errors and showed that the support estimation in atomic norm denoising
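For reference, the atomic norm denoising program takes the following standard semidefinite form (see, e.g., [26]), with τ > 0 a regularization weight, T(u) the Hermitian Toeplitz matrix whose first column is u, and u₁ the first entry of u:

\[
\hat{x} = \operatorname*{arg\,min}_{x,\,u,\,t}\ \frac{1}{2}\|y - x\|_2^2 + \frac{\tau}{2}\left(t + u_1\right)
\quad \text{subject to} \quad
\begin{bmatrix} T(u) & x \\ x^H & t \end{bmatrix} \succeq 0.
\]

The frequency estimates are then read off from a dual polynomial or from a Vandermonde decomposition of T(u).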
² In the context of SLE, the total variation (TV) norm and atomic norm are essentially equivalent [21].