Stochastic Neuromorphic Circuits for
Solving MAXCUT
Bradley H. Theilman
Neural Exploration and Research Lab
Sandia National Laboratories
Albuquerque, New Mexico
bhtheil@sandia.gov
Yipu Wang
Discrete Math and Optimization
Sandia National Laboratories
Albuquerque, New Mexico
yipwang@sandia.gov
Ojas Parekh
Discrete Math and Optimization
Sandia National Laboratories
Albuquerque, New Mexico
odparek@sandia.gov
William Severa
Neural Exploration and Research Lab
Sandia National Laboratories
Albuquerque, New Mexico
wmsever@sandia.gov
J. Darby Smith
Neural Exploration and Research Lab
Sandia National Laboratories
Albuquerque, New Mexico
jsmit16@sandia.gov
James B. Aimone
Neural Exploration and Research Lab
Sandia National Laboratories
Albuquerque, New Mexico
jbaimon@sandia.gov
Abstract—Finding the maximum cut of a graph
(MAXCUT) is a classic optimization problem that has
motivated parallel algorithm development. While approximation
algorithms for MAXCUT offer attractive theoretical
guarantees and demonstrate compelling empirical
performance, such approximation approaches can
shift the dominant computational cost to the stochastic
sampling operations. Neuromorphic computing, which
uses the organizing principles of the nervous system
to inspire new parallel computing architectures, offers
a possible solution. One ubiquitous feature of natural
brains is stochasticity: the individual elements of bio-
logical neural networks possess an intrinsic randomness
that serves as a resource enabling their unique compu-
tational capacities. By designing circuits and algorithms
that make use of randomness similarly to natural
brains, we hypothesize that the intrinsic randomness in
microelectronics devices could be turned into a valuable
component of a neuromorphic architecture enabling
more efficient computations. Here, we present neuro-
morphic circuits that transform the stochastic behavior
of a pool of random devices into useful correlations
that drive stochastic solutions to MAXCUT. We show
that these circuits perform favorably in comparison
to software solvers and argue that this neuromorphic
hardware implementation provides a path for scaling
advantages. This work demonstrates the utility of com-
bining neuromorphic principles with intrinsic random-
ness as a computational resource for new computational
architectures.
I. INTRODUCTION
Despite the heavy requirements for noise-free
operation placed on the components of conventional
computers, random numbers play a crucially important
role in many parallel computing problems arising in
different scientific domains. Because current random
number generation occurs largely in software, the
required randomness in these systems is plagued
by the same memory-processing bottlenecks that
limit ordinary computation. Current work in material
science and microelectronics is demonstrating the
feasibility of constructing stochastic microelectronic
devices with controllable statistics for probabilistic
neural computing [22]. These devices show scalability
properties that forecast the ability to generate ran-
dom numbers in-situ with the processing elements,
bypassing this bottleneck.
Stochasticity is an inherent property of physical
systems, both natural and artificial. Physical com-
puters are able to approximate ideal computations
because much effort has been expended in developing
electronic technology that minimizes the influence of
universal electronic “noise.” As microelectronics get
smaller and the scale of our computations gets larger,
current computational paradigms require even more
stringent limits on the influence of this electronic
noise, and these limits become severe constraints on
the scalability of existing computational architectures.
In contrast, natural brains are examples of highly
parallel computational systems that achieve amazingly
efficient computational performance in the face of
ubiquitous noise. There are on the order of $10^{15}$
arXiv:2210.02588v1 [cs.NE] 5 Oct 2022
synapses in a human brain, and each one is stochastic:
its probability of successfully transmitting a signal to a
downstream neuron ranges from 0.1 to 0.9 [20]. Each
synapse is activated about once per second on average,
so the brain generates about $10^{15}$ random numbers
per second [22]. Compare this to the reliability of
transistor switching in conventional computers, where
the probability of failure is less than $10^{-14}$ [20]. It is
unknown precisely how brains deal with this stochas-
ticity, but its pervasiveness strongly suggests that the
brain uses its own randomness as a computational
resource rather than treating it as a defect that must be
eliminated. This suggests that a new class of parallel
computing architectures could emerge from combining
the computational principles of natural brains with
physical sources of intrinsic randomness. This would
allow the natural stochasticity of electronic devices
to play a part in large-scale parallel computations,
relieving the burden imposed by requiring absolute
reliability.
Realizing the potential of probabilistic neural com-
putation requires rethinking conventional parallel
algorithms to incorporate stochastic elements from the
bottom up. Additionally, techniques for controlling the
randomness must be developed so that useful random
numbers can be produced efficiently from the desired
distributions. In this work, we propose neuromorphic
circuits that demonstrate the capacity for intrinsic
randomness to solve parallel computing problems
and techniques for controlling device randomness to
produce useful random numbers.
MAXCUT is a well-known, NP-complete problem
that has practical applications and serves as a model
problem and testbed for both classical and beyond-
Moore algorithm development [4], [9], [15], [19]. The
problem requires partitioning the vertices of a graph
into two disjoint classes such that the number of edges
that span classes is maximized. MAXCUT has several
stochastic approximation algorithms, which makes it
an ideal target for developing new architectures lever-
aging large-scale parallel stochastic circuit elements
for computational benefit.
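The baseline these stochastic algorithms improve on is the trivial one: assign each vertex to a side uniformly at random, which cuts half of the edges in expectation. A quick Monte Carlo sketch of this baseline (plain Python; the graph and helper names are illustrative, not from the paper):

```python
import random

def expected_random_cut(edges, n, trials=20000, seed=0):
    """Estimate the expected cut size of a uniformly random partition.

    Each vertex independently receives label -1 or +1, so each edge is
    cut (its endpoints differ) with probability exactly 1/2.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        v = [rng.choice((-1, 1)) for _ in range(n)]
        total += sum(1 for i, j in edges if v[i] != v[j])
    return total / trials

# Triangle: 3 edges, so a random partition cuts 1.5 edges in expectation,
# while the true maximum cut is 2.
estimate = expected_random_cut([(0, 1), (1, 2), (0, 2)], 3)
```

Replacing these independent coin flips with correlated random variables is precisely what lifts the algorithms below above this one-half baseline.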
Stochastic approximation algorithms are compared
via their approximation ratio, which is the ratio of the
expected value of a stochastically generated solution
to the maximum possible value. The stochastic approx-
imation to MAXCUT with the largest known approxi-
mation ratio is the Goemans-Williamson algorithm [9].
The Goemans-Williamson algorithm provides the best
approximation ratio achievable by any polynomial-
time algorithm under the Unique Games Conjec-
ture [19]. To generate solutions, this algorithm requires
sampling from a Gaussian distribution with a specific
covariance matrix obtained by solving a semi-definite
program related to the adjacency matrix of the graph.
Our first neural circuit implements this sampling step
by using simple neuron models to transform uniform
device randomness into the required distribution. This
demonstrates the use of neuromorphic principles to
transform an intrinsic source of randomness into a
computationally useful distribution.
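A numpy sketch of this sampling-and-rounding step, assuming the semi-definite program has already been solved and its solution converted to a covariance matrix `Sigma` (function names and the identity-covariance demo are our illustrative choices, not the paper's implementation):

```python
import numpy as np

def cut_value(A, v):
    # (1/4) * sum_ij A_ij (1 - v_i v_j): over a symmetric 0/1 adjacency
    # matrix the double sum counts each cut edge twice with weight 2.
    return 0.25 * (A.sum() - v @ A @ v)

def sample_cuts(A, Sigma, trials=200, seed=0):
    """Draw Gaussian vectors with the prescribed covariance and round each
    coordinate to its sign, keeping the best cut found across trials."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    g = rng.multivariate_normal(np.zeros(n), Sigma, size=trials)
    cuts = np.where(g >= 0, 1, -1)                 # hyperplane rounding
    best = max(cuts, key=lambda v: cut_value(A, v))
    return best, cut_value(A, best)

# 4-cycle; with Sigma = I this degenerates to independent coin flips, but
# the rounding machinery is identical for an SDP-derived covariance.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
best, val = sample_cuts(A, np.eye(4))
```

With an SDP-derived `Sigma`, the sign-rounded samples achieve the Goemans-Williamson guarantee in expectation; the circuit in the paper replaces the software Gaussian sampler with stochastic devices.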
Another stochastic approximation for MAXCUT
is the Trevisan algorithm [27], [29]. Despite having
a worse theoretical approximation ratio, in practice
this algorithm generates solutions on par with the
Goemans-Williamson algorithm [21]. To generate
solutions, this algorithm requires computing the mini-
mum eigenvector of the normalized adjacency matrix.
Our second neuromorphic circuit implements this
algorithm using the same circuit motif as above to
generate random numbers with a specific correlation,
but instead of sampling cuts from this distribution,
we use these numbers to drive a synaptic plasticity
rule (Oja’s rule) inspired by the Hebbian principle in
neuroscience [24]. This learning rule can be shown
to converge to the desired eigenvector, from which
the solution can be sampled. This circuit solves the
MAXCUT problem entirely within the circuit, without
requiring any external preprocessing, demonstrating
the capacity of neuromorphic circuits driven by
intrinsic randomness to solve computationally relevant
parallel problems.
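A numpy sketch of the underlying principle (not the spiking circuit itself): Oja's rule applied to a stream of samples with covariance $C$ converges to the top unit eigenvector of $C$, so feeding it samples whose covariance is the shifted matrix $2I - N$, where $N$ is the normalized adjacency matrix, recovers the minimum eigenvector of $N$. The shift $2I - N$ and the helper names are our illustrative choices:

```python
import numpy as np

def oja_top_eigenvector(C, eta=0.005, steps=30000, seed=0):
    """Run Oja's rule on random samples with covariance C.

    Update: w <- w + eta * y * (x - y * w) with y = w.x, a Hebbian term
    plus a decay that keeps |w| near 1; w converges (up to sign) to the
    eigenvector of C with the largest eigenvalue.
    """
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    L = np.linalg.cholesky(C)      # x = L z has covariance C for z ~ N(0, I)
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)
    for _ in range(steps):
        x = L @ rng.standard_normal(n)
        y = w @ x                  # postsynaptic activity
        w += eta * y * (x - y * w) # Oja's rule
    return w / np.linalg.norm(w)

# 4-cycle: N = A/2 has eigenvalues {1, 0, 0, -1}; its minimum eigenvector
# is the alternating vector, whose signs encode the optimal cut.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
N = A / 2.0
w = oja_top_eigenvector(2 * np.eye(4) - N)
```

Sign-rounding `w` gives a candidate cut; Trevisan's algorithm applies a more careful threshold rounding to this eigenvector.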
Neuromorphic computing is having increasing im-
pacts on non-cognitive problems relevant for parallel
computing [1]. Unlike other hardware approaches
to MAXCUT, our contributions directly instantiate
state-of-the-art MAXCUT approximation algorithms
on arbitrary graphs without requiring costly recon-
figuration or conversion of the problem to an Ising
model with pairwise interactions [10], [11], [30].
Our use of hardware resources is scalable, requiring
one neuron and one random device per vertex, and
thus more efficient than parallel implementations of
MAXCUT using GPUs [8]. These properties make
our contributions valuable to the expanding field of
beyond-Moore parallel algorithms.
II. MAXCUT ALGORITHMS
A. The Goemans-Williamson MAXCUT Algorithm
Given an $n$-vertex, $m$-edge graph $G = (V, E)$ with
vertex set $V$ and edge set $E$, the MAXCUT problem
seeks a partition of the vertices into two disjoint
subsets, $V = V_1 \cup V_2$, such that the number of edges
that cross between the two subsets is maximized. By
assigning either of the values $\{-1, 1\}$ to each vertex,
the MAXCUT problem is equivalent to maximizing
the function
$$\max_{v} \; \frac{1}{2} \sum_{i,j \in V} A_{ij} \, (1 - v_i v_j) \quad \text{s.t. } v \in \{-1, 1\}^n,$$
where $A$ is the adjacency matrix of $G$.
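Taking the double sum literally (each undirected edge of a symmetric adjacency matrix appears twice), the objective evaluates to twice the cut size. A brute-force check on a small graph makes this concrete (plain Python, exponential in $n$, illustration only):

```python
from itertools import product

def max_cut_brute_force(A):
    """Maximize 0.5 * sum_ij A[i][j] * (1 - v_i * v_j) over v in {-1,1}^n
    and report the cut size: the objective divided by 2, since the
    symmetric double sum counts every cut edge twice."""
    n = len(A)
    best = 0.0
    for v in product((-1, 1), repeat=n):
        obj = 0.5 * sum(A[i][j] * (1 - v[i] * v[j])
                        for i in range(n) for j in range(n))
        best = max(best, obj / 2)
    return best

# Triangle: no partition separates all 3 edges, so the maximum cut is 2.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```

For this triangle the function returns 2.0, confirming that the $\pm 1$ formulation counts exactly the crossing edges.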