JOURNAL ARTICLE

A new learning algorithm for feedforward neural networks

Abstract

In this paper we develop a constructive learning algorithm for feedforward neural networks. Training is incremental: patterns are learned one by one, starting from a single training pattern and a single hidden-layer neuron. When training becomes stuck in a local minimum, the algorithm attempts to escape by means of a weight-scaling technique. Only after several consecutive failed escape attempts is the network allowed to grow by adding a hidden-layer neuron. At that stage, an optimization procedure based on quadratic/linear programming selects initial weights for the newly added neuron; this procedure tends to bring the network within the error tolerance with little or no further training after the neuron is added. Our simulation results indicate that the algorithm obtains neural networks very close to minimal structures and that convergence (to a solution) in neural network training can be guaranteed. We tested the algorithm extensively on the parity problem.
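The training loop the abstract describes can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the sigmoid/MSE gradient rule, the scaling factor 0.7, the phase/epoch budgets, and the `GrowingNet` class are all assumptions, and a new neuron is initialized randomly here rather than by the paper's quadratic/linear-programming step. Patterns are also presented as a batch instead of one by one.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp the argument to avoid overflow in exp().
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

class GrowingNet:
    """One-hidden-layer sigmoid network that can grow one neuron at a time."""

    def __init__(self, n_in):
        self.n_in = n_in
        self.hidden = [self._rand(n_in + 1)]   # per-neuron input weights + bias
        self.out = self._rand(2)               # one output weight + output bias

    @staticmethod
    def _rand(n):
        return [random.uniform(-1.0, 1.0) for _ in range(n)]

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + ws[-1])
             for ws in self.hidden]
        y = sigmoid(sum(w * hj for w, hj in zip(self.out, h)) + self.out[-1])
        return h, y

    def train_step(self, data, lr):
        """One epoch of per-pattern gradient descent; returns mean squared error."""
        sse = 0.0
        for x, t in data:
            h, y = self.forward(x)
            sse += (t - y) ** 2
            dy = (y - t) * y * (1.0 - y)                       # output delta
            dh = [dy * self.out[j] * hj * (1.0 - hj)           # hidden deltas
                  for j, hj in enumerate(h)]
            for j, hj in enumerate(h):
                self.out[j] -= lr * dy * hj
                for i, xi in enumerate(x):
                    self.hidden[j][i] -= lr * dh[j] * xi
                self.hidden[j][-1] -= lr * dh[j]
            self.out[-1] -= lr * dy
        return sse / len(data)

    def scale_weights(self, factor):
        # Crude stand-in for the paper's weight-scaling escape technique.
        self.hidden = [[w * factor for w in ws] for ws in self.hidden]
        self.out = [w * factor for w in self.out]

    def add_neuron(self):
        # Random initialization here; the paper instead solves a QP/LP
        # so the grown network needs little or no further training.
        self.hidden.append(self._rand(self.n_in + 1))
        self.out.insert(-1, random.uniform(-1.0, 1.0))  # keep bias last

def train(data, tol=0.01, lr=0.8, phase_epochs=1500, max_escapes=3, max_hidden=8):
    net = GrowingNet(len(data[0][0]))
    escapes = 0
    while True:
        for _ in range(phase_epochs):
            err = net.train_step(data, lr)
            if err < tol:
                return net, err                 # reached the error tolerance
        if escapes < max_escapes:
            net.scale_weights(0.7)              # try to escape the local minimum
            escapes += 1
        elif len(net.hidden) < max_hidden:
            net.add_neuron()                    # grow after repeated failures
            escapes = 0
        else:
            return net, err                     # capacity cap reached; give up

if __name__ == "__main__":
    # 2-bit parity (XOR), the smallest instance of the paper's test problem.
    parity2 = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    net, err = train(parity2)
    print("hidden neurons:", len(net.hidden), "final MSE:", round(err, 4))
```

The sketch only grows the network after `max_escapes` consecutive failed escape attempts, mirroring the abstract's policy of preferring escape over growth so the final structure stays close to minimal.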

Keywords:
Computer science, Artificial neural network, Constructive, Feedforward neural network, Algorithm, Convergence, Quadratic programming, Time delay neural network, Artificial intelligence, Mathematical optimization, Mathematics

Metrics

Cited by: 5
FWCI (Field-Weighted Citation Impact): 0.37
References: 26
Citation Normalized Percentile: 0.66

Topics

- Neural Networks and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
- Machine Learning and ELM (Physical Sciences → Computer Science → Artificial Intelligence)
- Face and Expression Recognition (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
