George M. Georgiou, Cris Koutsougeras
The search space for backpropagation (BP) is usually of high dimensionality, which slows convergence. Moreover, minima abound, so the danger of falling into a shallow one is great. To limit the search space of BP in a sensible way, we incorporate domain knowledge into the training process. A two-phase Backpropagation algorithm is presented. In the first phase, the weight vectors of the first (and possibly the only) hidden layer are constrained to remain in fixed directions, for example those of linear discriminants or Principal Components; the directions are chosen based on the problem at hand. In the second phase, the constraints are removed and the standard Backpropagation algorithm takes over to further minimize the error function. The first phase swiftly places the weight vectors in a good position (relatively low error), which then serves as the initialization for standard Backpropagation. Other speed-up techniques can be used in both phases. The generality of its application, its simplicity, and the shorter training time it requires make this approach attractive.
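The two-phase scheme described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the toy dataset, learning rate, epoch counts, and network size are all hypothetical. Phase 1 fixes each hidden weight vector's direction to a principal component of the inputs and trains only its magnitude (plus biases and the output layer); phase 2 releases the constraint and runs standard backpropagation on all weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy two-class data (not from the paper).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Constraint directions for phase 1: principal components of the inputs.
_, eigvecs = np.linalg.eigh(np.cov(X.T))
D = eigvecs.T            # rows are unit-length PCA directions
H = D.shape[0]           # one hidden unit per direction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W, b, V, c):
    h = sigmoid(X @ W.T + b)        # hidden activations
    return h, sigmoid(h @ V + c)    # hidden, output

alpha = np.ones((H, 1))             # trainable magnitudes (phase 1)
b = np.zeros(H)
V = rng.normal(scale=0.1, size=(H, 1))
c = np.zeros(1)
lr = 0.5
n = len(X)

# Phase 1: hidden weight vectors are alpha_i * D_i; directions stay fixed,
# only the magnitudes alpha, biases, and output layer are updated.
for _ in range(200):
    W = alpha * D
    h, out = forward(W, b, V, c)
    d_out = (out - y) * out * (1 - out)      # error at output pre-activation
    d_h = (d_out @ V.T) * h * (1 - h)        # error at hidden pre-activation
    gW = d_h.T @ X / n                       # full gradient w.r.t. W
    # Project the gradient of each W_i onto its fixed direction D_i.
    alpha -= lr * np.sum(gW * D, axis=1, keepdims=True)
    b -= lr * d_h.mean(axis=0)
    V -= lr * h.T @ d_out / n
    c -= lr * d_out.mean(axis=0)

# Phase 2: constraint removed; standard backpropagation on all weights,
# starting from the weights found in phase 1.
W = alpha * D
for _ in range(200):
    h, out = forward(W, b, V, c)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ V.T) * h * (1 - h)
    W -= lr * d_h.T @ X / n
    b -= lr * d_h.mean(axis=0)
    V -= lr * h.T @ d_out / n
    c -= lr * d_out.mean(axis=0)

_, out = forward(W, b, V, c)
acc = float(np.mean((out > 0.5) == (y > 0.5)))
print(f"accuracy after two-phase training: {acc:.2f}")
```

Note the key design point: in phase 1 the gradient with respect to each constrained weight vector is projected onto its fixed direction, so the search happens in an H-dimensional space of magnitudes rather than the full weight space, which is what makes the first phase fast.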
M. T. Sandford, J. N. Bradley, Theodore G. Handel
Ada Cammilleri, Eduardo P. Serrano
Barbara Seiders, Dennis McQuerry, Thomas A. Ferryman, Paul Whitney, Anthony Rybka