Spiking Neural Networks (SNNs) are an exciting prospect in the field of Artificial Neural Networks (ANNs). ANNs try to replicate the massive interconnection of neurons, the brain's computational units, to perform useful tasks, albeit with highly abstracted models of neurons. Artificial neurons are mostly realized as non-linear activation functions that process numeric inputs and produce numeric outputs. SNNs are less abstract than such systems in the sense that they use mathematical models of neurons, termed spiking neurons, which process inputs in the form of spikes and emit spikes as output. This is exactly how natural neurons exchange information. Since spikes are events in time, SNNs carry an extra dimension of time along with amplitude, which makes them well suited to temporal processing.

There are only a few supervised learning algorithms for SNNs. For multilayer architectures, the main options are SpikeProp and its extensions, and Multi-ReSuMe. The SpikeProp methods adapt backpropagation to SNNs and mostly consider only the first spike of each neuron. The original SpikeProp is usually slow and faces stability issues during learning: both large and very small learning rates often make it unstable. The instability appears as sudden jumps in training error, called surges, which change the course of learning and often cause the learning process to fail. To introduce a stability criterion, we present a weight convergence analysis of SpikeProp. Based on the convergence condition, we derive an adaptive learning rate rule that selects a learning rate small enough to guarantee convergence of the learning process, yet large enough that learning remains fast. On several benchmark problems, this method with learning rate adaptation, SpikePropAd, exhibits fewer surges and learns faster than SpikeProp
and its faster variant RProp. Performance is evaluated broadly in terms of learning speed and rate of successful learning.

We also consider internal and external disturbances to the learning process and provide a thorough error analysis in addition to the weight convergence analysis. We use conic sector stability theory to determine conditions that make the learning process stable in the L2 space, and we extend the result to L∞ stability. L2 stability requires the disturbance to die out after a certain period of time, whereas L∞ stability implies that the system remains stable provided the disturbance is bounded. We explore two approaches for robust stability.
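The flavor of a convergence-guaranteeing adaptive learning rate can be illustrated with a minimal sketch. The actual SpikePropAd rule follows from the thesis's weight convergence analysis; the normalized rule, function name, and constants below are illustrative assumptions, not the author's method:

```python
import numpy as np

def adaptive_lr_step(w, grad, mu=0.5, eps=1e-8):
    """One weight update with a normalized adaptive learning rate.

    Hypothetical illustration: the step size is scaled down by the
    squared gradient norm, a common way to satisfy a Lyapunov-style
    weight convergence condition while keeping the step as large as
    the condition allows.
    """
    eta = mu / (1.0 + np.dot(grad, grad) + eps)  # larger gradients -> smaller step
    return w - eta * grad

# Toy quadratic cost E(w) = 0.5 * ||w||^2, whose gradient is w itself:
w = np.array([2.0, -1.0])
for _ in range(50):
    w = adaptive_lr_step(w, w)
print(np.linalg.norm(w))  # the weights contract toward zero
```

Because the effective step size shrinks whenever the gradient is large, the update never overshoots on this toy cost, which is the intuition behind choosing the learning rate from a convergence condition rather than fixing it by hand.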