Pruning of Deep Spiking Neural Networks through Gradient Rewiring
Yanqi Chen, Zhaofei Yu, Wei Fang, Tiejun Huang, Yonghong Tian
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 1713-1721.
https://doi.org/10.24963/ijcai.2021/236
Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips. Because these chips are usually resource-constrained, compressing SNNs is crucial for their practical deployment. Most existing methods directly apply pruning approaches from artificial neural networks (ANNs) to SNNs, ignoring the differences between ANNs and SNNs and thus limiting the performance of the pruned SNNs. Moreover, these methods are suitable only for shallow SNNs. In this paper, inspired by synaptogenesis and synapse elimination in the nervous system, we propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weight for SNNs, which enables us to seamlessly optimize the network structure without retraining. Our key innovation is to redefine the gradient with respect to a new synaptic parameter, allowing better exploration of network structures by taking full advantage of the competition between pruning and regrowth of connections. The experimental results show that the proposed method achieves the smallest performance loss of pruned SNNs on the MNIST and CIFAR-10 datasets to date. Moreover, it incurs only a ~3.5% accuracy loss at an unprecedented 0.73% connectivity, which reveals remarkable structure-refining capability in SNNs. Our work suggests that there exists extremely high redundancy in deep SNNs. Our code is available at https://github.com/Yanqi-Chen/Gradient-Rewiring.
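To illustrate the idea described in the abstract, the following is a minimal, hypothetical sketch of a rewiring-style update (not the authors' exact Grad R algorithm; see the linked repository for the real implementation). It assumes each connection has a latent parameter theta with a fixed sign, the effective weight is sign * relu(theta), and the weight gradient is redirected onto theta, so that pruned connections (theta <= 0, hence zero weight) still receive gradient signal and can regrow:

```python
import numpy as np

def rewire_step(theta, sign, grad_w, lr=0.1):
    """One hypothetical gradient-rewiring-style update (illustrative sketch).

    theta  : latent per-connection parameter; theta <= 0 means pruned.
    sign   : fixed sign of each connection (+1 excitatory, -1 inhibitory).
    grad_w : gradient of the loss w.r.t. the effective weight w.
    """
    # Redirect the weight gradient onto the latent parameter. Because the
    # update is applied even where theta <= 0 (pruned connections), a
    # sufficiently strong gradient can push theta back above zero, i.e.
    # the connection regrows and competes with active ones.
    theta = theta - lr * sign * grad_w
    # Effective weight: pruned connections contribute exactly zero.
    w = sign * np.maximum(theta, 0.0)
    return theta, w

# A pruned connection (theta < 0) regrows under a favorable gradient:
theta = np.array([-0.05, 0.3])
sign = np.array([1.0, -1.0])
grad_w = np.array([-1.0, 0.0])   # loss decreases if w[0] increases
theta, w = rewire_step(theta, sign, grad_w, lr=0.1)
```

Here the first connection starts pruned (weight 0) but its latent parameter crosses zero after the update, reactivating it without any retraining phase, which is the competition between pruning and regrowth the abstract refers to.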
Keywords:
Humans and AI: Brain Sciences
Humans and AI: Cognitive Modeling
Machine Learning: Classification