
Super-resolved imaging based on spatiotemporal wavefront shaping
Guillaume Noetinger,1 Samuel Métais,2 Geoffroy Lerosey,3
Mathias Fink,1 Sébastien M. Popoff,1 and Fabrice Lemoult1
1Institut Langevin, ESPCI Paris, Université PSL, CNRS, 75005 Paris, France
2Aix Marseille Université, CNRS, Centrale Marseille, Institut Fresnel, Marseille, France
3Greenerwave, 75002 Paris, France
(Dated: October 24, 2022)
A novel approach to improving the performance of confocal scanning imaging is proposed. We
experimentally demonstrate its feasibility using acoustic waves. It relies on a new way to encode
spatial information using the temporal dimension. By moving an emitter, used to insonify an object,
along a circular path, we create a temporally modulated wavefield. Due to the cylindrical symmetry
of the problem and its temporal periodicity, the spatiotemporal input field can be decomposed
into harmonics corresponding to different spatial vortices, or topological charges. By acquiring
the back-reflected waves with receivers that are also rotating, we obtain multiple images of the
same object with different Point Spread Functions (PSFs). Not only is the resolution improved
compared to a standard confocal configuration, but the accumulation of information also allows
building images beating the diffraction limit. The topological robustness of the approach promises
good performance in real-life conditions.
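The harmonic decomposition invoked above can be checked in a minimal far-field toy model (a 2D scalar-wave sketch of our own, with arbitrary parameter values, not the experimental configuration): by the Jacobi-Anger expansion, the n-th temporal harmonic of the field radiated by a source rotating on a circle carries an azimuthal phase e^{inθ}, i.e. a vortex of topological charge n.

```python
import numpy as np

# Toy far-field model (assumed, not the paper's setup): a point source
# rotating on a circle of radius a, observed on a distant circle of
# angles theta.  Jacobi-Anger: exp(i z cos(phi)) = sum_n i^n J_n(z) exp(i n phi),
# so the n-th temporal harmonic is a vortex of topological charge n.
k, a = 2 * np.pi, 0.3                        # wavenumber, source-path radius
n_t, n_th = 256, 64
t = np.arange(n_t) / n_t                     # one rotation period
theta = 2 * np.pi * np.arange(n_th) / n_th   # observation angles

# Relative phase of the rotating source seen from direction theta
field = np.exp(1j * k * a * np.cos(theta[None, :] - 2 * np.pi * t[:, None]))
harm = np.fft.ifft(field, axis=0)            # temporal harmonics

def charge(n):
    # Count the 2*pi phase windings of harmonic n around the circle
    ph = np.angle(harm[n])
    steps = np.angle(np.exp(1j * np.diff(ph, append=ph[0])))
    return int(round(steps.sum() / (2 * np.pi)))

print(charge(1), charge(2), charge(3))       # -> 1 2 3
```

Each temporal sideband is thus tagged with a distinct, mutually orthogonal spatial structure, which is what the imaging scheme exploits.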
Imaging devices exploit waves to retrieve information
about an object. Ultrasound imaging devices or optical
microscopes are two widespread commercially-available
technologies relying on different waves, both offering
key insights for medical diagnosis and scientific research.
While having different properties, those approaches rely
on the same principles, and their resolution is limited
by the same diffraction effects to a distance of the order
of the wavelength. More specifically, in optical full-field
microscopy, a sample is uniformly illuminated and the
waves scattered off an object are collected with a micro-
scope objective. The finite aperture of the optical system
filters out waves corresponding to high scattering angles,
and thus the finest details [1]. The image of a point,
i.e. the Point-Spread Function (PSF) of the system, is
an Airy spot whose first zero is located at a distance
1.22 λ/(2NA) from the focus (where λ is the operating
wavelength and NA the numerical aperture). The Rayleigh
criterion states that two point-like objects closer than
this distance cannot be distinguished [2].
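These two statements can be evaluated numerically; in the sketch below the wavelength and numerical aperture are arbitrary placeholders (not values from the text), and the Bessel function J1 is computed from its power series.

```python
import math

# Illustrative numbers only (lambda and NA are not from the text).
wavelength = 500e-9      # operating wavelength (m)
na = 0.5                 # numerical aperture

def j1(x):
    # Bessel function J1 from its power series (adequate for x up to ~10)
    return sum((-1)**m * (x / 2)**(2*m + 1)
               / (math.factorial(m) * math.factorial(m + 1))
               for m in range(25))

def airy_intensity(r):
    # Airy pattern: I(r) = [2 J1(x)/x]^2 with x = 2 pi NA r / lambda
    x = 2 * math.pi * na * r / wavelength
    if x == 0:
        return 1.0       # limit of 2 J1(x)/x as x -> 0
    return (2 * j1(x) / x) ** 2

# First zero of J1 is at x ~ 3.8317, giving r = 1.22 lambda / (2 NA):
rayleigh = 1.22 * wavelength / (2 * na)
print(rayleigh)                  # 6.1e-07 m for these numbers
print(airy_intensity(rayleigh))  # essentially zero at the first dark ring
```

Two points closer than `rayleigh` then produce overlapping Airy spots that the Rayleigh criterion declares unresolvable.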
Overcoming this limitation is the domain of super-
resolution imaging and many techniques have already
been proposed [3–12]. Since any label-free imaging scheme
can be decomposed into two main steps, i.e. acquiring data
and processing the acquired data, these techniques can
be divided into two different classes.
The first strategy consists of shaping the illumination
and/or the collection of waves. The simplest implementa-
tion is the well-known confocal microscope [13, 14], where
both the illumination and the detection correspond to a
diffraction-limited volume. In addition to suppressing
out-of-focus signals, it enhances the maximum transmit-
ted spatial frequency by a factor of two [15]. Nevertheless,
this improvement is difficult to observe experimentally
because the gain at high spatial frequencies is poor. The
resulting increase in lateral resolution is usually considered
to be roughly 40%. Other approaches were proposed,
all relying on engineering the illumination or the
collection scheme, such as diffractive tomography [16, 17],
ptychography [18], structured illumination [19] or other
PSF-engineering [20, 21]. However, all these techniques
remain ultimately limited in resolution by diffraction.
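The confocal numbers quoted above can be made concrete with a crude 1D sketch (our simplification, replacing the Airy spot by a Gaussian): the confocal PSF is the product of the illumination and detection PSFs, and squaring a Gaussian narrows its full width at half maximum by only √2 ≈ 1.41, consistent with the roughly 40% lateral-resolution gain.

```python
import math

# Gaussian stand-in for the diffraction-limited spot (arbitrary units).
sigma = 1.0

def psf(r, s=sigma):
    return math.exp(-r**2 / (2 * s**2))

def psf_confocal(r):
    # Confocal detection: illumination PSF x detection PSF = PSF^2
    return psf(r) ** 2

def fwhm(f, r_max=5.0, n=100001):
    # Numerically locate the full width at half maximum of f
    half = f(0.0) / 2
    rs = [i * r_max / (n - 1) for i in range(n)]
    r_half = next(r for r in rs if f(r) < half)
    return 2 * r_half

ratio = fwhm(psf) / fwhm(psf_confocal)
print(round(ratio, 2))    # -> 1.41, i.e. sqrt(2)
```

The transmitted spatial-frequency band doubles (the product in real space is a convolution in Fourier space), but the attenuated high frequencies translate into only this modest width reduction.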
The second strategy consists of developing algorithms
for reconstructing the image. Indeed, the Rayleigh criterion
is a somewhat arbitrary rule. In particular, it
assumes that any spatial frequency outside the transmitted
bandwidth is definitively lost. Nonetheless, there are strong
arguments that the resolution is limited
only by the signal-to-noise ratio [22, 23]. However, the
presence of noise in real-life experiments makes the problem
ill-posed, precluding a direct inversion of the mathematical
equations. Various algorithms were developed to
regularize the system by compensating for unknown or
noisy information using strong priors [24–27].
In this letter, we propose an approach combining wavefront
shaping and mathematical deconvolution to improve
the resolution. It consists of using dynamic wavefront
shaping to rotate an illumination wavefront, thus
using the temporal domain as an additional way to en-
code information. We show that it is equivalent to mea-
suring the image of an object with multiple imaging sys-
tems with different orthogonal PSFs, corresponding to
different harmonics of the received signals. The addition
of information provided by these images allows increasing
the effective signal-to-noise ratio and thus improving
the resolution. We experimentally demonstrate this con-
cept in acoustics in a confocal-like configuration by re-
constructing the image of two small scatterers. Using a
simple deconvolution process we show that two scatterers
can be distinguished below the diffraction limit.
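A toy 1D analogue of this multi-PSF strategy (our own sketch, with Hermite-Gauss-like modes standing in for the vortex-harmonic PSFs, and all values illustrative) shows how accumulating channels with mutually orthogonal PSFs reduces the error of a joint deconvolution compared to a single channel.

```python
import numpy as np

# The same object seen through several orthogonal PSFs, combined in a
# joint Wiener-type deconvolution (assumed model, not the paper's data).
rng = np.random.default_rng(0)
n = 256
x = np.zeros(n)
x[120] = x[134] = 1.0                      # two nearby point scatterers

r = np.arange(n) - n // 2
u = r / 8.0
g = np.exp(-u**2 / 2)
modes = [g, u * g, (2 * u**2 - 1) * g]     # orthogonal Hermite-Gauss shapes
psfs = [np.roll(h / np.linalg.norm(h), n // 2) for h in modes]

Hs = [np.fft.fft(h) for h in psfs]
X = np.fft.fft(x)
Ys = [H * X + np.fft.fft(0.005 * rng.standard_normal(n)) for H in Hs]

def joint_wiener(Hs, Ys, eps=1e-2):
    # Least-squares combination of all channels with Tikhonov damping
    num = sum(np.conj(H) * Y for H, Y in zip(Hs, Ys))
    den = sum(np.abs(H)**2 for H in Hs) + eps
    return np.real(np.fft.ifft(num / den))

err_one = np.linalg.norm(joint_wiener(Hs[:1], Ys[:1]) - x)
err_all = np.linalg.norm(joint_wiener(Hs, Ys) - x)
print(err_all < err_one)                   # accumulating channels helps
```

Each extra channel both averages the noise and fills spectral regions where the other transfer functions are weak, which is the mechanism behind the effective signal-to-noise gain described above.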
The fundamental idea of the proposal relies on creating
a singularity that would allow discriminating precisely
arXiv:2210.12010v1 [physics.optics] 21 Oct 2022