DISSERTATION

Incremental extreme learning machine

Abstract

This theory shows that, for single-hidden-layer feedforward networks (SLFNs) to work as universal approximators, one may simply choose the input-to-hidden nodes at random and then adjust only the output weights linking the hidden layer to the output layer. In such SLFN implementations, the activation functions for additive nodes can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. We propose two incremental algorithms: 1) the incremental extreme learning machine (I-ELM) and 2) the convex I-ELM (CI-ELM).
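As a concrete illustration of the incremental scheme the abstract describes, the following is a minimal I-ELM sketch in Python with NumPy. The sigmoid activation, the uniform sampling range, and the node count are illustrative assumptions rather than details fixed by the abstract; what the sketch does reflect is the core idea of adding one random hidden node at a time and solving only that node's output weight against the current residual.

```python
import numpy as np

def i_elm(X, y, max_nodes=50, seed=None):
    """Minimal I-ELM sketch: grow the hidden layer one random node at a
    time, fitting only each new node's output weight to the residual."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    residual = y.astype(float).copy()  # e_0 = y (network starts empty)
    nodes = []                         # (input weights a, bias b, output weight beta)
    for _ in range(max_nodes):
        a = rng.uniform(-1.0, 1.0, n_features)    # random input weights (assumed range)
        b = rng.uniform(-1.0, 1.0)                # random bias (assumed range)
        h = 1.0 / (1.0 + np.exp(-(X @ a + b)))    # sigmoid hidden-node output
        beta = (residual @ h) / (h @ h)           # least-squares output weight
        residual = residual - beta * h            # e_n = e_{n-1} - beta_n * h_n
        nodes.append((a, b, beta))
    return nodes

def i_elm_predict(nodes, X):
    """Sum the weighted outputs of all hidden nodes."""
    out = np.zeros(X.shape[0])
    for a, b, beta in nodes:
        out += beta / (1.0 + np.exp(-(X @ a + b)))
    return out
```

Because each `beta` is the least-squares fit of one node to the residual, the residual norm is non-increasing as nodes are added. CI-ELM differs in how the new node is combined: the updated network is a convex combination of the previous network and the new node, rather than a plain additive update.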

Keywords:
Activation function, Artificial neural network, Feedforward neural network, Piecewise function, Radial basis function, Bounded function, Function (mathematics), Implementation, Computer science, Artificial intelligence, Mathematics, Engineering, Control engineering

Metrics

Cited By: 1
FWCI (Field-Weighted Citation Impact): 0.00
References: 154

Topics

Machine Learning and ELM
Physical Sciences →  Computer Science →  Artificial Intelligence
Neural Networks and Applications
Physical Sciences →  Computer Science →  Artificial Intelligence
Stochastic Gradient Optimization Techniques
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Convex incremental extreme learning machine

Guang-Bin Huang, Lihui Chen

Journal: Neurocomputing, Year: 2007, Vol: 70 (16-18), Pages: 3056-3062
JOURNAL ARTICLE

Sparse pseudoinverse incremental extreme learning machine

Peyman Hosseinzadeh Kassani, Andrew Beng Jin Teoh, Euntai Kim

Journal: Neurocomputing, Year: 2018, Vol: 287, Pages: 128-142
JOURNAL ARTICLE

Length-Changeable Incremental Extreme Learning Machine

Youxi Wu, Dong Liu, He Jiang

Journal: Journal of Computer Science and Technology, Year: 2017, Vol: 32 (3), Pages: 630-643