Real Spike 3
they share the same convolution kernel or not. Unlike in DNNs, shared convolution kernels do not bring SNNs the advantages of parameter reduction and inference acceleration in this situation. Hence we argue that it would be better to learn an unshared convolution kernel for each output feature map in SNNs.
Unfortunately, it is not feasible, either in theory or in practice, to directly train an SNN with unshared convolution kernels. First, there is no clear evidence that directly learning different convolution kernels will surely benefit network performance. Second, due to the lack of mature development platforms for SNNs, many efforts focus on training SNNs with DNN-oriented programming frameworks, which usually do not directly support learning an unshared convolution kernel for each feature map. Considering these limitations, we focus on training SNNs with unshared convolution kernels indirectly, on top of modern DNN-oriented frameworks.
Driven by the above reasons, a training-time and inference-time decoupled
SNN is proposed, where a neuron can emit real-valued spikes during training
but binary spikes during inference, dubbed Real Spike. The training-time real-
valued spikes can be converted to inference-time binary spikes via convolution kernel re-parameterization, and the shared convolution kernel can then be derived into multiple unshared ones (see details in Sec. 3.3). In this way, an SNN with
different convolution kernels for every output feature map can be obtained as
we expected. Specifically, in the training phase, the SNN will learn real-valued
spikes and a shared convolution kernel for every output feature map. In the inference phase, every real-valued spike is transformed into a binary spike by folding part of its value into the corresponding kernel weight. Due to
the diversity of the real-valued spikes, by absorbing part of the value from each
real spike, the original convolution kernel shared by each output map can be
converted into multiple forms. Thus different convolution kernels for each fea-
ture map of SNNs can be obtained indirectly. It can be theoretically guaranteed that Real Spike improves performance, since real-valued spikes have a richer representation capability than binary spikes (see details in Sec. 3.4). Besides, Real Spike is fully compatible with present DNN-oriented
programming frameworks, and it still retains the advantages of DNN-oriented
frameworks in terms of the convolution kernel sharing mechanism in the train-
ing. Furthermore, we extract the essential idea of training-inference decoupling and extend Real Spike to a more generalized form, which is friendly to both neuromorphic and non-neuromorphic hardware (see details in Sec. 3.5). The
overall workflow of the proposed method is illustrated in Fig. 2.
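The folding step described above can be illustrated with a minimal numerical sketch. Assuming (for illustration only; shapes, variable names, and the naive convolution helper below are hypothetical, not the paper's implementation) that each real-valued spike is a binary spike scaled by a learned per-channel value a_c, convolving real-valued spikes with a shared kernel is equivalent to convolving pure binary spikes with a kernel whose weights have absorbed a_c, which yields a differently scaled effective kernel per channel:

```python
import numpy as np

rng = np.random.default_rng(0)
C_in, C_out, k = 3, 4, 3                # toy channel counts and kernel size
H = W_sz = 5                            # toy spatial size

o = rng.integers(0, 2, size=(C_in, H, W_sz)).astype(float)  # binary spikes
a = rng.uniform(0.5, 1.5, size=C_in)                        # per-channel real-spike values
W = rng.standard_normal((C_out, C_in, k, k))                # shared convolution kernel

def conv2d(x, w):
    """Naive valid-mode multi-channel 2-D convolution (for illustration)."""
    co, ci, kh, kw = w.shape
    oh, ow = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    y = np.zeros((co, oh, ow))
    for m in range(co):
        for i in range(oh):
            for j in range(ow):
                y[m, i, j] = np.sum(w[m] * x[:, i:i+kh, j:j+kw])
    return y

# Training-time view: real-valued spikes (a * o) through the shared kernel.
y_train = conv2d(a[:, None, None] * o, W)

# Inference-time view: binary spikes through the re-parameterized kernel,
# where each channel's scaling has been folded into the weights.
W_folded = W * a[None, :, None, None]
y_infer = conv2d(o, W_folded)

print(np.allclose(y_train, y_infer))  # prints True
```

The equivalence holds by linearity of convolution: scaling an input channel by a_c is the same as scaling the corresponding kernel slice by a_c, which is why binary spikes plus re-parameterized (and hence unshared) kernels reproduce the training-time computation exactly.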
Our main contributions are summarized as follows:
– We propose Real Spike, a simple yet effective method to obtain SNNs with unshared convolution kernels. The Real Spike-SNN can be trained in DNN-oriented frameworks directly, and it effectively enhances the information representation capability of the SNN without introducing training difficulty.
– The convolution kernel re-parameterization is introduced to decouple a training-time SNN with real-valued spikes and shared convolution kernels,