Abstract

In this paper, we propose an adaptive learning rate algorithm for the gradient descent method used in Artificial Neural Networks (ANNs). The proposed algorithm starts from an initially assigned learning rate and, after a set of backpropagation rounds, revises it to the minimum learning rate plus the average of the minimum and maximum learning rates (these parameters are updated in each recursive call). In the next recursive call of the algorithm, the revised learning rate is used if the average total error decreases; otherwise, the previous learning rate is retained. This continues until the desired learning rate is reached, yielding optimal weight parameters with high classification accuracy. We compared the proposed algorithm with existing heuristic approaches and found that it converges faster and returns optimized weights with higher classification accuracy. We also tested the effectiveness of the algorithm on spam email classification using multilayer neural networks and found that it performs better than the existing approach and is less prone to overfitting. The proposed algorithm achieves 99.12% accuracy on the email classification task.
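The update rule described above can be sketched in Python. This is a minimal illustration, not the paper's actual pseudocode: the helper name `avg_error_for` (running a set of backpropagation rounds at a given rate and returning the average total error), the way `lr_min`/`lr_max` are updated on acceptance or rejection, and the stopping criterion are all assumptions for the sake of a runnable example.

```python
def adaptive_lr(avg_error_for, lr, lr_min, lr_max, max_calls=20):
    """Tune a learning rate by the rule from the abstract (illustrative).

    avg_error_for(rate): assumed callable that runs a set of
    backpropagation rounds at `rate` and returns the average total error.
    """
    prev_err = avg_error_for(lr)
    for _ in range(max_calls):
        # Revised rate: minimum rate plus the average of min and max rates.
        candidate = lr_min + (lr_min + lr_max) / 2.0
        err = avg_error_for(candidate)
        if err < prev_err:
            # Error decreased: adopt the revised rate and narrow the
            # interval around it (interval update is an assumption).
            lr, prev_err = candidate, err
            lr_max = candidate
        else:
            # Error did not decrease: keep the previous rate and shrink
            # the upper bound (assumption).
            lr_max = (lr_min + lr_max) / 2.0
        if lr_max - lr_min < 1e-6:  # interval exhausted (assumed stop rule)
            break
    return lr


# Toy usage with a synthetic error surface whose best rate is 0.1.
toy_error = lambda rate: abs(rate - 0.1)
tuned = adaptive_lr(toy_error, lr=0.5, lr_min=0.001, lr_max=1.0)
```

In this toy run the returned rate yields a lower average error than the initial rate of 0.5, which is the acceptance criterion the abstract describes.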

Keywords:
Computer science, Artificial neural network, Overfitting, Artificial intelligence, Backpropagation, Machine learning, Rate of convergence, Gradient descent, Heuristics, Word error rate, Algorithm

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 0.00
Refs: 12
Citation Normalized Percentile: 0.22

Topics

Machine Learning and ELM (Physical Sciences → Computer Science → Artificial Intelligence)
Neural Networks and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
Face and Expression Recognition (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
