This paper presents a New Layer-by-Layer (NLBL) training algorithm for speeding up the training of multilayer feedforward neural networks. It follows an approach similar to that of the Layer-by-Layer (LBL) algorithm, taking into account the input errors of both the output layer and the hidden layer. Unlike LBL, however, the proposed NLBL algorithm does not need to compute the gradient of the error function, and it avoids the stalling problem that exists in the LBL algorithm. In each iteration, the weights and thresholds are optimized directly, one at a time, with all other variables held fixed. Four classes of solution equations for the network parameters are derived. In comparison with the backpropagation algorithm with momentum (BPM) and the conventional LBL algorithm, the NLBL algorithm achieves faster convergence and better simulation performance when applied to a real-world oil-gas prediction problem.
Feng Li, Jacek M. Żurada, Yan Liu, Wei Wu
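The abstract's core idea of optimizing parameters "one by one with other variables fixed" can be illustrated with a generic coordinate-descent sketch. This is not the paper's NLBL algorithm (which derives closed-form solution equations per parameter class); it is a minimal, hypothetical stand-in that replaces gradient steps with a derivative-free 1-D line search on each weight in turn. All names and the toy problem below are our own assumptions.

```python
import numpy as np

# Hypothetical illustration, NOT the paper's NLBL method: coordinate-wise
# training of a tiny two-layer feedforward net. Each scalar parameter is
# optimized in turn with all others held fixed, using a derivative-free
# ternary line search instead of a gradient step.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))            # toy inputs (assumed data)
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]      # toy regression target

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)   # output layer

def loss():
    h = np.tanh(X @ W1 + b1)               # hidden activations
    return np.mean((h @ W2 + b2 - y) ** 2) # mean squared error

def line_search(flat, idx, span=1.0, iters=30):
    """Ternary search over one parameter; all other parameters stay fixed."""
    lo, hi = flat[idx] - span, flat[idx] + span
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        flat[idx] = m1; f1 = loss()
        flat[idx] = m2; f2 = loss()
        lo, hi = (lo, m2) if f1 < f2 else (m1, hi)
    flat[idx] = 0.5 * (lo + hi)

before = loss()
for sweep in range(5):                     # a few full coordinate sweeps
    for arr in (W2, b2, W1, b1):           # output layer first, then hidden
        flat = arr.ravel()                 # view: writes go back into arr
        for i in range(flat.size):
            line_search(flat, i)
after = loss()
print(f"MSE before {before:.4f} -> after {after:.4f}")
```

The sweep order (output layer before hidden layer) loosely mirrors the abstract's use of output-layer and hidden-layer input errors; the closed-form per-parameter updates of the actual algorithm would replace the numeric line search here.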