The number of hidden nodes has a strong influence on the accuracy of the Extreme Learning Machine (ELM), and more hidden nodes are needed as the training set grows. Both ELM and multi-hidden-layer neural networks require the number of hidden layers to be fixed in advance, after which the number of nodes in every layer is increased to achieve a smaller RMSE; the computational complexity of the involved matrix operations therefore grows, and the learning efficiency deteriorates. In this paper, an extreme learning machine with incremental hidden layers (MHL-ELM) is proposed, in which the weights of the current hidden layer's nodes are assigned randomly (keeping the node count small, with no optimization and low complexity) and the corresponding RMSE is then computed as in ELM. MHL-ELM adds a hidden layer and repeats this "layer-wise pre-training" until the RMSE falls below the desired threshold. The complexity of MHL-ELM is ∑_{l=1}^{M} O(N_l³). Compared with traditional algorithms such as BP and OP-ELM, MHL-ELM achieves better generalization performance, a smaller RMSE, and a faster learning time on ten UCI, KEEL, and real-world data sets.
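The incremental loop described in the abstract — draw random hidden weights, solve the output weights analytically, and append another hidden layer until the RMSE target is met — can be sketched in NumPy as below. The layer width, tanh activation, stopping threshold, and function names are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def mhl_elm(X, T, n_hidden=30, max_layers=5, target_rmse=0.01, seed=0):
    """Sketch of an ELM that adds hidden layers until the training RMSE
    drops below target_rmse. Hidden-layer weights are drawn randomly and
    never tuned; only the output weights are solved by least squares."""
    rng = np.random.default_rng(seed)
    layers = []          # (W, b) of each stacked hidden layer
    H = X
    beta, rmse = None, np.inf
    for _ in range(max_layers):
        # Randomly assign this layer's input weights and biases (no optimization).
        W = rng.standard_normal((H.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(H @ W + b)                  # hidden-layer output matrix
        layers.append((W, b))
        # Output weights via the Moore-Penrose pseudoinverse, as in standard ELM.
        beta = np.linalg.pinv(H) @ T
        rmse = np.sqrt(np.mean((H @ beta - T) ** 2))
        if rmse <= target_rmse:                 # stop once the target is reached
            break
    return layers, beta, rmse

def mhl_elm_predict(layers, beta, X):
    """Forward pass through the stacked random hidden layers."""
    H = X
    for W, b in layers:
        H = np.tanh(H @ W + b)
    return H @ beta
```

For example, fitting a 1-D regression target: `layers, beta, rmse = mhl_elm(X, T)` followed by `mhl_elm_predict(layers, beta, X)`. The cost per added layer is dominated by the pseudoinverse of the N_l-column hidden matrix, which is the source of the ∑_{l=1}^{M} O(N_l³) complexity stated above.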
Mengcan Min, Xiaofang Chen, Yongxiang Lei, Zhiwen Chen, Yongfang Xie