of Moore’s law (Waldrop, 2016; Shalf, 2020; Leiserson et al., 2020), conven-
tional digital von Neumann computers and clock-driven sensor systems face
considerable hurdles regarding bandwidth and computational efficiency. For
example, the gap between the computational requirements for training state-of-
the-art deep learning models and the capacity of the underlying hardware has
grown exponentially during the last decade (Mehonic and Kenyon, 2022). Mean-
while, in stark contrast, distributed digitized systems—ever-growing in size and
complexity—require increasing computational efficiency for AI applications at
the resource-constrained edge of the internet (Zhou et al., 2019; Ye et al., 2021),
where sensors are generating increasingly unmanageable amounts of data.
One approach to addressing this lack of computational capacity and effi-
ciency is offered by neuromorphic engineering (Mead, 1990, 2020). There, in-
spiration is drawn from the most efficient information processing systems known
to humanity—brains—for the design of hardware systems for sensing (Tayarani-
Najaran and Schmuker, 2021) and processing (Zhang et al., 2020a; Basu et al.,
2022) that have the potential to drive the next wave of computational technology
and artificial intelligence (Christensen et al., 2022; Frenkel et al., 2021; Shrestha
et al., 2022). Neuromorphic—that is, brain-like—computing systems imitate the
brain at the level of organizational principles (Indiveri and Liu, 2015), and often
also at the level of device physics by leveraging nonlinear phenomena in semi-
conductors (Chicca et al., 2014; Rubino et al., 2021) and other nanoscale devices
(Zidan et al., 2018; Marković et al., 2020) for non-digital computation. The idea
of using nonlinear physical phenomena for non-digital computing has been ex-
plored for decades. Different choices of underlying mathematical models lead
to different definitions of what the concept of “computation” entails (Jaeger,
2021), and likely also influence the set of possible emergent innovations.
Here, we define neuromorphic computing (NC) systems as non–von Neu-
mann information-processing systems, the structure and function of which ei-
ther emulate or simulate the neuronal dynamics of brains—especially of somas,
but sometimes also synapses, dendrites, and axons—typically in the form of
spiking neural networks (SNNs) (Maass, 1997; Nunes et al., 2022; Wang et al.,
2022). NC systems open up new algorithmic spaces—through asynchronous
massive parallelism, sparse, event-driven activity, and co-location of memory
and processing (Indiveri and Liu, 2015)—and, in terms of energy usage and la-
tency, offer superior solutions to a range of brain-like computational problems
(Davies et al., 2021; Yin et al., 2021; Stöckl and Maass, 2021; Göltz et al.,
2021; Rao et al., 2022). Furthermore, beyond cognitive applications, SNNs and
NC systems have also demonstrated potential for applications such as graph al-
gorithms, constrained optimization, random walks, partial-differential-equation
solving, signal processing, and algorithm composition (Aimone et al., 2022).
Consequently, there is a growing interest in NC technology within applica-
tion domains such as automotive technology, digitized industrial production
and monitoring, mobile devices, robotics, biosensing (such as brain–machine in-
terfaces and wearables), prosthetics, telecommunications-network (5G/6G) op-
timization, and space technology.
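The spiking dynamics referenced above can be illustrated with a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one of the simplest soma models commonly used in SNNs. This is not the model of any particular NC system discussed here; all parameter values are illustrative, and the point is only to show the sparse, event-driven character of spiking activity: output is produced as discrete spike events, and only when the input drives the membrane potential past a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# decays toward its resting value while integrating input current, and a
# threshold crossing emits a spike followed by a reset. All parameter
# values below are illustrative, not taken from any specific NC system.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 tau=10.0, dt=1.0):
    """Return the spike times (step indices) for a sampled input current."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Forward-Euler step of  dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:       # threshold crossing: emit a spike event
            spikes.append(t)
            v = v_reset         # reset the membrane potential
    return spikes

# A constant suprathreshold current yields regular spiking, while zero
# input yields no events at all (sparse, event-driven activity).
print(simulate_lif([1.5] * 50))
print(simulate_lif([0.0] * 50))  # []
```

In an SNN, many such units run in parallel and communicate only via these spike events, which is what makes the asynchronous, event-driven hardware implementations discussed above attractive for energy and latency.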
One challenge facing neuromorphic technology is that of integrating emerg-
ing diverse hardware systems, such as neuromorphic processors and quantum
computers, into a common computational environment (Vetter et al., 2018).
Such hardware systems are—due to performance constraints of existing compu-
tational hardware in, for instance, energy usage or processing speed—likely to