JOURNAL ARTICLE

The Improved Training Algorithm of Back Propagation Neural Network with Self-adaptive Learning Rate

Abstract

This paper addresses the problem of improving the convergence performance of back propagation (BP) neural networks. In the traditional BP algorithm, the learning rate is selected by experience and trial. In this paper, the functional relationship between the change in total quadratic training error and the changes in connection weights and biases is obtained from the Taylor formula; combining this with the weight and bias updates of the batch BP learning algorithm yields a formula for a self-adaptive learning rate. Unlike existing algorithms, this self-adaptive learning rate depends only on the network topology, the training samples, the average quadratic error, and the gradient of the error surface, not on manual selection. Simulation results show that the number of iterations is significantly smaller than that of the traditional batch BP learning algorithm with a constant learning rate.
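The abstract does not state the closed-form learning-rate formula, so the following is only a minimal sketch of the idea it describes: batch BP where, at each epoch, the rate is computed from the current error and the error-surface gradient rather than chosen by hand. The first-order Taylor estimate E(w − ηg) ≈ E(w) − η‖g‖² suggests the illustrative choice η = E / ‖g‖², used here as a stand-in for the paper's derivation; the network size, training data (XOR), cap on η, and epoch count are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-2-1 sigmoid network, batch-trained on XOR (illustrative setup).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

errors = []
for epoch in range(2000):
    # Forward pass over the whole batch.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    e = y - T
    E = 0.5 * np.sum(e ** 2)          # total quadratic training error
    errors.append(float(E))

    # Batch backpropagation gradients.
    d2 = e * y * (1 - y)              # output-layer delta
    gW2 = h.T @ d2; gb2 = d2.sum(axis=0)
    d1 = (d2 @ W2.T) * h * (1 - h)    # hidden-layer delta
    gW1 = X.T @ d1; gb1 = d1.sum(axis=0)

    # Self-adaptive rate from the first-order Taylor estimate
    # (capped at 1.0 as a practical safeguard -- an assumption,
    # not part of the paper's derivation).
    gnorm2 = sum(np.sum(g ** 2) for g in (gW1, gb1, gW2, gb2))
    eta = min(E / (gnorm2 + 1e-12), 1.0)

    W1 -= eta * gW1; b1 -= eta * gb1
    W2 -= eta * gW2; b2 -= eta * gb2

print(f"initial error {errors[0]:.4f}, final error {errors[-1]:.4f}")
```

Note that η here is recomputed every epoch from quantities the network already has (the current error and gradient), which is the property the abstract emphasizes: no manually tuned constant rate.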

Keywords:
Backpropagation; Artificial neural network; Rate of convergence; Algorithm; Artificial intelligence; Quadratic error function

Metrics

Cited by: 69
FWCI (Field-Weighted Citation Impact): 5.27
References: 10
Citation Normalized Percentile: 0.96 (in top 10%)
Topics

Advanced Algorithms and Applications
Physical Sciences → Engineering → Control and Systems Engineering
Neural Networks and Applications
Physical Sciences → Computer Science → Artificial Intelligence
Advanced Sensor and Control Systems
Physical Sciences → Engineering → Control and Systems Engineering