Spiking neural networks (SNNs), inspired by the biological neural processing of the brain, are attracting growing interest due to their potential to handle spatiotemporal patterns with low energy consumption, especially when implemented on neuromorphic devices. In this study, we propose self-supervised contrastive learning (SSL) for SNNs to learn informative latent representations from a large set of unlabeled data. The SSL pre-trained SNN is then fine-tuned on a small set of labeled samples from a downstream supervised task. To evaluate the proposed method, we trained convolutional SNNs with SSL on the MNIST and CIFAR10 datasets, using 80% of the images as unlabeled samples, and then fine-tuned the networks on the remaining 20%. The resulting SSL-based SNNs reached recognition accuracies of 94.23% on the MNIST test set and 62.24% on the CIFAR10 test set.
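The pipeline the abstract describes (contrastive pre-training on unlabeled images, followed by supervised fine-tuning) can be sketched compactly. The code below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a plain-PyTorch leaky integrate-and-fire (LIF) neuron with a rectangular surrogate gradient, constant-current input coding, a rate-coded readout, and a SimCLR-style NT-Xent contrastive loss; the `ConvSNNEncoder` architecture, hyperparameters, and the `augment` stand-in are all hypothetical.

```python
# Minimal sketch of SSL contrastive pre-training for a convolutional SNN.
# All architecture/loss choices here are illustrative assumptions, not the
# paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the firing threshold.
        return grad_out * (v.abs() < 0.5).float()


class LIF(nn.Module):
    """Leaky integrate-and-fire neurons unrolled over T time steps."""

    def __init__(self, tau=2.0, v_th=1.0):
        super().__init__()
        self.tau, self.v_th = tau, v_th

    def forward(self, x):  # x: (T, B, ...)
        v, spikes = torch.zeros_like(x[0]), []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau            # leaky integration
            s = SurrogateSpike.apply(v - self.v_th)  # fire at threshold
            v = v * (1.0 - s)                        # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)


class ConvSNNEncoder(nn.Module):
    """Small convolutional SNN with a rate-coded projection readout (hypothetical)."""

    def __init__(self, T=8, dim=128):
        super().__init__()
        self.T = T
        self.conv1 = nn.Conv2d(3, 32, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.lif1, self.lif2 = LIF(), LIF()
        self.head = nn.Linear(64, dim)  # projection head for the contrastive loss

    def forward(self, x):                            # x: (B, 3, H, W)
        x = x.unsqueeze(0).expand(self.T, *x.shape)  # constant-current coding over T steps
        T, B = x.shape[:2]
        h = self.lif1(self.conv1(x.flatten(0, 1)).unflatten(0, (T, B)))
        h = self.lif2(self.conv2(h.flatten(0, 1)).unflatten(0, (T, B)))
        rate = h.mean(dim=(0, 3, 4))                 # average spikes over time and space
        return self.head(rate)


def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style NT-Xent loss: pull two views together, push others apart."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2B, D)
    sim = z @ z.t() / tau                            # cosine similarity matrix
    n = sim.shape[0]
    mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))       # exclude self-similarity
    targets = torch.arange(n, device=z.device).roll(n // 2)  # index of each positive
    return F.cross_entropy(sim, targets)


# One pre-training step on an unlabeled batch: two stochastic views of each
# image form a positive pair; all other images in the batch act as negatives.
augment = lambda x: x + 0.1 * torch.randn_like(x)    # stand-in for real augmentations
encoder = ConvSNNEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

images = torch.rand(16, 3, 32, 32)                   # unlabeled CIFAR10-sized batch
loss = nt_xent(encoder(augment(images)), encoder(augment(images)))
opt.zero_grad(); loss.backward(); opt.step()
```

Fine-tuning on the labeled 20% would then reuse the pre-trained convolutional layers, replace `head` with a classification layer, and train with a standard cross-entropy loss.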