In this paper we develop a constructive learning algorithm for feedforward neural networks. We employ an incremental training procedure in which training patterns are learned one by one. The algorithm starts with a single training pattern and a single hidden-layer neuron. When training becomes stuck in a local minimum, we attempt to escape from it using a weight-scaling technique. Only after several consecutive failed escape attempts do we allow the network to grow by adding a hidden-layer neuron. At this stage, we employ an optimization procedure based on quadratic/linear programming to select the initial weights of the newly added neuron. This optimization procedure tends to bring the network within the error tolerance with little or no further training after a hidden-layer neuron is added. Our simulation results indicate that the present constructive algorithm obtains neural networks very close to minimal structures and that convergence (to a solution) in neural network training can be guaranteed. We tested our algorithm extensively using the parity problem.
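The constructive loop summarized above can be sketched in plain Python on the 2-bit parity (XOR) problem, the benchmark family mentioned in the abstract. This is only an illustration under simplifying assumptions, not the paper's algorithm: plain online gradient descent stands in for the paper's training procedure, a random downward rescaling of the weights stands in for its weight-scaling escape technique, and the new neuron's initial weights are drawn at random rather than selected by quadratic/linear programming.

```python
import math
import random

random.seed(1)

# 2-bit parity (XOR) training patterns.
PATTERNS = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
            ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def forward(x, W, b, v, c):
    """One hidden tanh layer, linear output unit."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bi)
         for w, bi in zip(W, b)]
    return h, sum(vj * hj for vj, hj in zip(v, h)) + c

def sse(W, b, v, c):
    """Sum of squared errors over all patterns."""
    return sum((forward(x, W, b, v, c)[1] - t) ** 2 for x, t in PATTERNS)

def train(W, b, v, c, lr=0.1, epochs=2000):
    """Plain online gradient descent (stand-in for the paper's trainer)."""
    for _ in range(epochs):
        for x, t in PATTERNS:
            h, y = forward(x, W, b, v, c)
            e = y - t
            for j in range(len(v)):
                g = e * v[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                for i in range(len(x)):
                    W[j][i] -= lr * g * x[i]
                b[j] -= lr * g
                v[j] -= lr * e * h[j]
            c -= lr * e
    return c

def constructive_fit(tol=0.01, max_neurons=4, max_escapes=3):
    # Start with a single hidden neuron, randomly initialized.
    n_in = 2
    W = [[random.uniform(-1, 1) for _ in range(n_in)]]
    b = [random.uniform(-1, 1)]
    v = [random.uniform(-1, 1)]
    c = 0.0
    while True:
        c = train(W, b, v, c)
        err = sse(W, b, v, c)
        if err < tol or len(v) >= max_neurons:
            return W, b, v, c, err
        # Escape attempts: shrink all weights by a random factor
        # (a simplified stand-in for weight scaling), then retrain.
        escaped = False
        for _ in range(max_escapes):
            s = random.uniform(0.5, 0.9)
            W = [[s * w for w in row] for row in W]
            b = [s * x for x in b]
            v = [s * x for x in v]
            c = train(W, b, v, c)
            new_err = sse(W, b, v, c)
            if new_err < 0.5 * err:  # count a clear improvement as an escape
                err = new_err
                escaped = True
                break
        if err < tol:
            return W, b, v, c, err
        if not escaped:
            # Grow: add one hidden neuron with small random weights
            # (the paper instead selects them by quadratic/linear programming).
            W.append([random.uniform(-0.5, 0.5) for _ in range(n_in)])
            b.append(random.uniform(-0.5, 0.5))
            v.append(random.uniform(-0.5, 0.5))

W, b, v, c, err = constructive_fit()
print(len(v), round(err, 4))
```

Because growth happens only after repeated failed escapes, the sketch mirrors the paper's bias toward small networks: the hidden-layer size increases one neuron at a time and stops as soon as the error tolerance is met.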