JOURNAL ARTICLE

Self-Supervised Contrastive Learning In Spiking Neural Networks

Abstract

Spiking neural networks (SNNs), inspired by the biological neural processing of the brain, are attracting growing interest due to their potential to handle spatiotemporal patterns with lower energy consumption, especially when implemented on neuromorphic devices. In this study, we propose self-supervised contrastive learning (SSL) for SNNs to learn informative latent representations from a large set of unlabeled data. The SSL pre-trained SNN is then fine-tuned on a small set of labeled samples for a downstream supervised task. To evaluate the proposed method, we trained convolutional SNNs with SSL on the MNIST and CIFAR10 datasets, using 80% of the images as unlabeled samples, and then fine-tuned the networks on the remaining 20%. The SSL-based SNNs reached recognition accuracies of 94.23% and 62.24% on the MNIST and CIFAR10 test sets, respectively.
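The abstract does not state which contrastive objective the authors use; a common choice for SSL pre-training of this kind is an NT-Xent-style loss, which pulls embeddings of two augmented views of the same input together while pushing apart embeddings of different inputs. The following NumPy sketch is illustrative only (the function name, batch shape, and temperature value are assumptions, not taken from the paper):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent contrastive loss (not the paper's exact objective).

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # stack views: (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = (z @ z.T) / temperature                     # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive for row i is its other view: i+N for i < N, i-N otherwise.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))       # log-sum-exp over candidates
    # Cross-entropy of picking the positive among all 2N-1 candidates.
    return float(np.mean(log_denom - sim[np.arange(2 * n), pos]))
```

In a pipeline like the one described, embeddings such as `z1` and `z2` would come from the convolutional SNN encoder applied to two augmentations of each unlabeled image; after pre-training with a loss of this form, the encoder is fine-tuned on the labeled 20% split.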

Keywords:
Computer science; Artificial intelligence; Spiking neural network; Artificial neural network; Supervised learning; Machine learning


