
synapses in a human brain, and each one is stochastic:
its probability of successfully transmitting a signal to a
downstream neuron ranges from 0.1 to 0.9 [20]. Each
synapse is activated about once per second on average,
so the brain generates about $10^{15}$ random numbers
per second [22]. Compare this to the reliability of
transistor switching in conventional computers, where
the probability of failure is less than $10^{-14}$ [20]. It is
unknown precisely how brains deal with this stochasticity,
but its pervasiveness strongly suggests that the
brain uses its own randomness as a computational
resource rather than treating it as a defect to be
eliminated. A new class of parallel
computing architectures could emerge from combining
the computational principles of natural brains with
physical sources of intrinsic randomness. This would
allow the natural stochasticity of electronic devices
to play a part in large-scale parallel computations,
relieving the burden imposed by requiring absolute
reliability.
Realizing the potential of probabilistic neural com-
putation requires rethinking conventional parallel
algorithms to incorporate stochastic elements from the
bottom up. Additionally, techniques for controlling the
randomness must be developed so that useful random
numbers can be produced efficiently from the desired
distributions. In this work, we propose neuromorphic
circuits that demonstrate the capacity for intrinsic
randomness to solve parallel computing problems
and techniques for controlling device randomness to
produce useful random numbers.
MAXCUT is a well-known NP-complete problem
that has practical applications and serves as a model
problem and testbed for both classical and beyond-
Moore algorithm development [4], [9], [15], [19]. The
problem requires partitioning the vertices of a graph
into two disjoint classes such that the number of edges
that span classes is maximized. MAXCUT has several
stochastic approximation algorithms, which makes it
an ideal target for developing new architectures lever-
aging large-scale parallel stochastic circuit elements
for computational benefit.
Stochastic approximation algorithms are compared
via their approximation ratio, which is the ratio of the
expected value of a stochastically generated solution
to the maximum possible value. The stochastic approx-
imation to MAXCUT with the largest known approxi-
mation ratio is the Goemans-Williamson algorithm [9].
The Goemans-Williamson algorithm provides the best
approximation ratio achievable by any polynomial-
time algorithm under the Unique Games Conjec-
ture [19]. To generate solutions, this algorithm requires
sampling from a Gaussian distribution with a specific
covariance matrix obtained by solving a semi-definite
program related to the adjacency matrix of the graph.
Our first neural circuit implements this sampling step
by using simple neuron models to transform uniform
device randomness into the required distribution. This
demonstrates the use of neuromorphic principles to
transform an intrinsic source of randomness into a
computationally useful distribution.
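The sampling-and-rounding step of the Goemans-Williamson algorithm can be sketched in a few lines. This is a software stand-in, assuming the covariance matrix from the semidefinite program is already available (the SDP solve itself is omitted, and the neuron-level transformation of uniform device randomness into Gaussians is abstracted into a library call); the function and parameter names are illustrative:

```python
import numpy as np

def gw_round(cov, rng, n_samples=100):
    """Round correlated Gaussian samples to candidate cuts.

    `cov` is assumed to be the PSD covariance matrix obtained from
    the Goemans-Williamson semidefinite relaxation (not computed here).
    """
    n = cov.shape[0]
    # Factor cov = L L^T; a tiny ridge keeps the Cholesky
    # factorization stable when cov is singular but PSD.
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(n))
    z = rng.standard_normal((n, n_samples))  # independent standard normals
    x = L @ z                                # columns have covariance cov
    return np.sign(x)                        # each column is a cut in {-1,+1}^n
```

In the proposed circuit, the correlating transformation would be carried out by the neurons and synaptic weights rather than by an explicit Cholesky factor; the sketch only illustrates the statistics being targeted.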
Another stochastic approximation for MAXCUT
is the Trevisan algorithm [27], [29]. Despite having
a worse theoretical approximation ratio, in practice
this algorithm generates solutions on par with the
Goemans-Williamson algorithm [21]. To generate
solutions, this algorithm requires computing the mini-
mum eigenvector of the normalized adjacency matrix.
Our second neuromorphic circuit implements this
algorithm using the same circuit motif as above to
generate random numbers with a specific correlation,
but instead of sampling cuts from this distribution,
we use these numbers to drive a synaptic plasticity
rule (Oja’s rule) inspired by the Hebbian principle in
neuroscience [24]. This learning rule can be shown
to converge to the desired eigenvector, from which
the solution can be sampled. This circuit solves the
MAXCUT problem entirely within the circuit, without
requiring any external preprocessing, demonstrating
the capacity of neuromorphic circuits driven by
intrinsic randomness to solve computationally relevant
parallel problems.
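The eigenvector-finding step can be sketched with Oja's rule. Oja's rule converges to the *dominant* eigenvector of its input covariance, so this sketch feeds it samples whose covariance is a PSD shift $cI - N$ of the normalized adjacency matrix $N$, which makes the minimum eigenvector of $N$ dominant; the shift, the learning-rate schedule, and the function names are assumptions made for illustration, not the exact circuit:

```python
import numpy as np

def oja_min_eigvec(N, rng, lr=0.01, steps=30000):
    """Estimate the minimum eigenvector of symmetric N via Oja's rule.

    Samples x = L z have covariance M = c*I - N, whose dominant
    eigenvector is the minimum eigenvector of N.
    """
    n = N.shape[0]
    c = np.linalg.norm(N, 2) + 1.0                 # shift so M is PSD
    M = c * np.eye(n) - N
    L = np.linalg.cholesky(M + 1e-9 * np.eye(n))
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)
    for t in range(steps):
        eta = lr / (1 + t / 1000)                  # decaying learning rate
        x = L @ rng.standard_normal(n)             # random input sample
        y = x @ w                                  # neuron output
        w += eta * y * (x - y * w)                 # Oja's update
    return w / np.linalg.norm(w)
```

For a connected bipartite graph the minimum eigenvector of the normalized adjacency matrix encodes the bipartition, so taking the signs of the converged weight vector recovers a cut through every edge.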
Neuromorphic computing is having an increasing impact
on non-cognitive problems relevant to parallel
computing [1]. Unlike other hardware approaches
to MAXCUT, our contributions directly instantiate
state-of-the-art MAXCUT approximation algorithms
on arbitrary graphs without requiring costly recon-
figuration or conversion of the problem to an Ising
model with pairwise interactions [10], [11], [30].
Our use of hardware resources is scalable, requiring
one neuron and one random device per vertex, and is
thus more efficient than parallel implementations of
MAXCUT using GPUs [8]. These properties make
our contributions valuable to the expanding field of
beyond-Moore parallel algorithms.
II. MAXCUT ALGORITHMS
A. The Goemans-Williamson MAXCUT Algorithm
Given an $n$-vertex, $m$-edge graph $G = (V, E)$ with
vertex set $V$ and edge set $E$, the MAXCUT problem
seeks a partition of the vertices into two disjoint
subsets, $V = V_{-1} \cup V_{1}$, such that the number of edges
that cross between the two subsets is maximized. By
assigning either of the values $\{-1, 1\}$ to each vertex,
the MAXCUT problem is equivalent to maximizing
the function
$$\max_{v} \; \frac{1}{2} \sum_{ij \in E} A_{ij} (1 - v_i v_j) \quad \text{s.t.} \quad v \in \{-1, 1\}^n,$$
where $A$ is the adjacency matrix of $G$.
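A brute-force evaluation of this objective on a small graph makes the formulation concrete (the helper names are illustrative, and the exponential enumeration is of course feasible only for tiny instances):

```python
import itertools
import numpy as np

def maxcut_objective(A, v):
    """Evaluate (1/2) * sum_{ij in E} A_ij (1 - v_i v_j),
    counting each unordered edge once."""
    n = len(v)
    return 0.5 * sum(A[i, j] * (1 - v[i] * v[j])
                     for i in range(n) for j in range(i + 1, n))

def brute_force_maxcut(A):
    """Exhaustively search all 2^n sign assignments (tiny graphs only)."""
    n = A.shape[0]
    best = max(itertools.product((-1, 1), repeat=n),
               key=lambda v: maxcut_objective(A, v))
    return np.array(best), maxcut_objective(A, best)
```

For the triangle $K_3$ the maximum cut value is 2: any bipartition of three mutually connected vertices necessarily leaves one edge uncut.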