JOURNAL ARTICLE

Incremental Multiple Hidden Layers Regularized Extreme Learning Machine Based on Forced Positive‐Definite Cholesky Factorization

Jingyi Liu, Ba Tuan Le

Year: 2019   Journal: Mathematical Problems in Engineering   Vol: 2019 (1)   Publisher: Hindawi Publishing Corporation

Abstract

The theory and implementation of the extreme learning machine (ELM) show that it is a simple, efficient, and accurate machine learning method. Compared with other single hidden layer feedforward neural network algorithms, ELM is characterized by simpler parameter selection rules, faster convergence, and less human intervention. The multiple hidden layer regularized extreme learning machine (MRELM) inherits these advantages of ELM and achieves higher prediction accuracy. In the MRELM model, the number of hidden layers is randomly initialized and then fixed, with no iterative tuning process. However, the number of hidden layers is a key factor determining the generalization ability of MRELM, so it is clearly unreasonable to set it by trial and random initialization. In this paper, an incremental MRELM training algorithm (FC‐IMRELM) based on forced positive‐definite Cholesky factorization is put forward to solve the network structure design problem of MRELM. First, an MRELM‐based prediction model with one hidden layer is constructed; then a new hidden layer is added to the prediction model in each training step until the generalization performance of the model reaches its peak. In this way, the optimal network structure of the prediction model is determined. In the training procedure, forced positive‐definite Cholesky factorization is used to calculate the output weights of MRELM, which avoids computing the matrix inverse and the Moore‐Penrose generalized inverse involved in training the hidden layer parameters. The FC‐IMRELM prediction model can therefore effectively reduce the computational cost of increasing the number of hidden layers.
Experiments on classification and regression problems indicate that the algorithm can be effectively used to determine the optimal network structure of MRELM, and that the prediction model trained by the algorithm performs excellently in both prediction accuracy and computational cost.
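The central computation the abstract describes — solving for regularized ELM output weights through a positive‐definite Cholesky factorization instead of an explicit inverse or Moore‐Penrose pseudoinverse — can be sketched as follows. This is an illustrative NumPy example for a single hidden layer, not the authors' FC‐IMRELM code; the names (`n_hidden`, `C`) and the toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 100 samples, 5 features, 1 target.
X = rng.standard_normal((100, 5))
T = np.sin(X.sum(axis=1, keepdims=True))

# One hidden layer with random input weights and biases (standard ELM).
n_hidden = 20
W = rng.standard_normal((5, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                 # hidden-layer output matrix

# Regularized normal equations: (H^T H + I/C) beta = H^T T.
# The ridge term I/C forces the Gram matrix to be positive definite,
# so a Cholesky factorization A = L L^T always exists.
C = 100.0
A = H.T @ H + np.eye(n_hidden) / C
L = np.linalg.cholesky(A)              # lower-triangular factor

# Two triangular solves replace any explicit matrix inversion.
y = np.linalg.solve(L, H.T @ T)        # forward substitution: L y = H^T T
beta = np.linalg.solve(L.T, y)         # back substitution: L^T beta = y

# Same solution as the explicit-inverse formula, cheaper and more stable.
beta_ref = np.linalg.inv(A) @ H.T @ T
print(np.allclose(beta, beta_ref))
```

Because the factorization costs roughly a third of an LU decomposition and no pseudoinverse is formed, repeating this solve each time a hidden layer is added stays cheap, which is the efficiency argument the abstract makes.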

Keywords:
Cholesky decomposition, Initialization, Extreme learning machine, Incomplete Cholesky factorization, Generalization, Computer science, Algorithm, Artificial neural network, Inverse, Positive-definite matrix, Convergence, Machine learning, Artificial intelligence, Mathematical optimization, Mathematics

Metrics

Cited By: 3
FWCI (Field Weighted Citation Impact): 0.46
Refs: 28
Citation Normalized Percentile: 0.70

Topics

Machine Learning and ELM
Physical Sciences →  Computer Science →  Artificial Intelligence
Advanced Memory and Neural Computing
Physical Sciences →  Engineering →  Electrical and Electronic Engineering

Related Documents

JOURNAL ARTICLE

QR factorization based Incremental Extreme Learning Machine with growth of hidden nodes

Yibin Ye, Qin Yang

Journal: Pattern Recognition Letters   Year: 2015   Vol: 65   Pages: 177-183

JOURNAL ARTICLE

Positive Definite Hankel Matrices Using Cholesky Factorization

Suliman Al‐Homidan, Mohammed Alshahrani

Journal: Computational Methods in Applied Mathematics   Year: 2009   Vol: 9 (3)   Pages: 221-225