
SpikeSim: An end-to-end Compute-in-Memory
Hardware Evaluation Tool for Benchmarking
Spiking Neural Networks
Abhishek Moitra∗, Student Member, IEEE, Abhiroop Bhattacharjee∗, Student Member, IEEE, Runcong Kuang,
Gokul Krishnan, Member, IEEE, Yu Cao, Fellow, IEEE, and Priyadarshini Panda, Member, IEEE
Abstract—Spiking Neural Networks (SNNs) are an active research domain towards energy-efficient machine intelligence. Compared to conventional artificial neural networks (ANNs), SNNs use temporal spike data and bio-plausible neuronal activation functions such as Leaky-Integrate Fire/Integrate Fire (LIF/IF) for data processing. However, SNNs entail a significant number of dot-product operations, causing high memory and computation overhead on standard von-Neumann computing platforms. To this end, In-Memory Computing (IMC) architectures have been proposed to alleviate the “memory-wall bottleneck” prevalent in von-Neumann architectures. Although recent works have proposed IMC-based SNN hardware accelerators, the following key implementation aspects have been overlooked: 1) the adverse effects of crossbar non-idealities on SNN performance due to repeated analog dot-product operations over multiple time-steps, and 2) the hardware overheads of essential SNN-specific components such as the LIF/IF neuronal and data communication modules. To address these gaps, we propose SpikeSim, a tool that performs realistic performance, energy, latency and area evaluation of IMC-mapped SNNs. SpikeSim consists of a practical monolithic IMC architecture called SpikeFlow for mapping SNNs. Additionally, the non-ideality computation engine (NICE) and energy-latency-area (ELA) engine perform hardware-realistic evaluation of SpikeFlow-mapped SNNs. Based on a 65nm CMOS implementation and experiments on the CIFAR10, CIFAR100 and TinyImagenet datasets, we find that the LIF/IF neuronal module has a significant area contribution (>11% of the total hardware area). To this end, we propose SNN topological modifications that lead to 1.24× and 10× reductions in the neuronal module's area and the overall energy-delay product, respectively. Furthermore, we perform a holistic comparison between IMC-implemented ANNs and SNNs and conclude that a lower number of time-steps is key to achieving higher throughput and energy efficiency for SNNs compared to 4-bit ANNs. The code repository for the SpikeSim tool will be made available in this Github link.
Index Terms—Spiking Neural Networks (SNNs), In-Memory
Computing, Emerging Devices, Analog Crossbars
∗These authors have contributed equally to this work.
Abhishek Moitra, Abhiroop Bhattacharjee, and Priyadarshini Panda are with the Department of Electrical Engineering, Yale University, New Haven, CT, USA.
Runcong Kuang, Gokul Krishnan, and Yu Cao are with the School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ 85287.

TABLE I: Table showing qualitative comparison of SpikeSim with related works. I- Inference, T- Training, VN- von-Neumann, IMC- In-memory Computing, ELA- Energy, Latency & Area, M- Monolithic and C- Chiplet Architecture.

Work                     | Platform | I / T | Non-Ideality | ELA Evaluation
ANN
Eyeriss [13]             | VN-M     | I     | ✗            | ✓
Neurosim [14]            | IMC-M    | I     | ✗            | ✓
CrossSim [15]            | IMC-M    | I     | ✗            | ✓
RxNN [16]                | IMC-M    | I     | ✓            | ✗
SIAM [17]                | IMC-C    | I     | ✓            | ✓
SNN
Loihi [4], TrueNorth [5] | VN-M     | I     | ✗            | ✗
SpinalFlow [6], PTB [7]  | VN-M     | I     | ✗            | ✗
H2Learn [18], SATA [19]  | VN-M     | T     | ✗            | ✓
RESPARC [9]              | IMC-M    | I     | ✗            | ✗
SpikeSim (ours)          | IMC-M    | I     | ✓            | ✓

I. INTRODUCTION
In the last decade, Spiking Neural Networks (SNNs) have gained significant attention in the context of energy-efficient machine intelligence [1]. SNNs encode input data information with discrete binary spikes over multiple time-steps, making
them highly suitable for asynchronous event-driven input pro-
cessing applications [2], [3]. Recent works have proposed full-
scale general-purpose von-Neumann architectures leveraging
the temporal processing property of SNNs [4], [5]. Other works such as [6], [7] have proposed novel dataflows to minimize the hardware overhead of von-Neumann implementations of SNNs. However, SNNs, like conventional Artificial Neural Networks (ANNs), entail significant dot-product operations, leading to high memory and energy overhead when implemented on traditional von-Neumann architectures (due to the “memory wall bottleneck”) [8], [9]. To this end, analog
In-Memory Computing (IMC) architectures [10]–[12] have
been proposed to perform analog dot-product or Multiply-
and-Accumulate (MAC) operations to achieve high memory
bandwidth and compute parallelism, thereby overcoming the
“memory wall bottleneck”.
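To make the interplay between temporal spike processing and crossbar-based MAC operations concrete, the following minimal Python/NumPy sketch evaluates one SNN layer over multiple time-steps; it is purely illustrative (the layer sizes and the LIF parameters v_th and beta are assumed), not SpikeSim's actual implementation. The matrix-vector product computed in each time-step is the operation that an analog IMC crossbar performs in a single read, with the weights stored as device conductances and the binary input spikes applied on the word-lines.

import numpy as np

# Illustrative layer sizes and LIF parameters (assumed, not taken from the paper)
T, n_in, n_out = 4, 64, 32                  # time-steps, fan-in, fan-out
v_th, beta = 1.0, 0.9                       # firing threshold, leak factor (beta = 1 gives IF)

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (n_out, n_in))     # weights, mapped to crossbar conductances
spikes_in = rng.random((T, n_in)) < 0.2     # binary input spikes for each time-step

v_mem = np.zeros(n_out)                     # LIF membrane potentials
spikes_out = np.zeros((T, n_out))
for t in range(T):
    i_syn = W @ spikes_in[t]                # dot-product: the analog MAC done on the crossbar
    v_mem = beta * v_mem + i_syn            # leaky integration of the membrane potential
    fired = v_mem >= v_th                   # fire when the threshold is crossed
    spikes_out[t] = fired                   # emit binary output spikes
    v_mem[fired] = 0.0                      # hard reset of neurons that fired

print(spikes_out.sum(axis=0))               # spike count per output neuron over T time-steps

Note that the same weight matrix is read in every time-step; on an analog crossbar this means any non-ideality in the MAC operation is incurred repeatedly over the T time-steps, which is the effect highlighted above.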
As an emerging and heavily researched computing paradigm, IMC requires hardware evaluation platforms for fast and accurate algorithm benchmarking. To this
effect, many state-of-the-art hardware evaluation frameworks
[14]–[17] have been proposed for realistic evaluation of IMC-
mapped ANNs. However, they are unsuitable for hardware-
realistic SNN evaluations as they lack key architectural modi-
fications required for temporal spike processing and non-linear
activation functions, such as Leaky Integrate Fire or Integrate
Fire (LIF/IF). In the context of hardware evaluation platforms
for SNNs, works such as [18], [19] have been proposed
for benchmarking SNN training on digital CMOS platforms.
Additionally, works such as [9] propose IMC architectures for
SNN inference. However, they lack several practical archi-
tectural considerations such as non-idealities incurred during
analog MAC computations [20]–[22], data communication