Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks
Jianhao Ding, Zhaofei Yu, Yonghong Tian, Tiejun Huang
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2328-2336.
https://doi.org/10.24963/ijcai.2021/321
Spiking Neural Networks (SNNs), as bio-inspired energy-efficient neural networks, have attracted great attention from researchers and industry. The most efficient way to train deep SNNs is through ANN-SNN conversion. However, the conversion usually suffers from accuracy loss and long inference time, which impede the practical application of SNNs. In this paper, we theoretically analyze ANN-SNN conversion and derive sufficient conditions for optimal conversion. To better correlate the ANN and the SNN and obtain higher accuracy, we propose the Rate Norm Layer to replace the ReLU activation function in source ANN training, enabling direct conversion from a trained ANN to an SNN. Moreover, we propose an optimal fit curve to quantify the fit between the activation values of the source ANN and the actual firing rates of the target SNN. We show that inference time can be reduced by optimizing the upper bound of the fit curve in the revised ANN, achieving fast inference. Our theory explains existing work on fast inference and obtains better results. Experimental results show that the proposed method achieves near-lossless conversion with VGG-16, PreActResNet-18, and deeper structures. Moreover, it reaches 8.6× faster inference under 0.265× the energy consumption of the typical method. The code is available at https://github.com/DingJianhao/OptSNNConvertion-RNL-RIL.
Keywords:
Machine Learning: Deep Learning
Humans and AI: Cognitive Modeling