Fatemeh Farokhmanesh, Mohammad Taghi Sadeghi
Feature selection is an important data dimensionality reduction method widely used in machine learning. In this framework, sparse representation based feature selection methods are very attractive because they try to represent the data with as few non-zero coefficients as possible. Deep neural networks usually operate in a very high-dimensional feature space, where such feature selection approaches can be taken advantage of. In this paper, three sparse feature selection methods are first compared. The Sparse Group Lasso (SGL) algorithm is one of the adopted approaches. This method is theoretically well founded and leads to good results for hand-crafted features. Its most important property is that it strongly induces sparsity in the data. A main step of the SGL method is grouping the features. In this paper, a k-means clustering based method is applied to group the features. Our experimental results show that this sparse representation based method leads to very successful results in deep neural networks.
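Below is a minimal sketch of such a pipeline, assuming a design matrix X (samples by features) and a response y. The k-means grouping of feature columns and the proximal-gradient SGL solver, along with parameter names such as lam, alpha and n_groups, are illustrative assumptions and not the authors' implementation.

```python
# Illustrative sketch: k-means feature grouping followed by a Sparse Group Lasso
# (SGL) fit via proximal gradient descent. All names and parameters here are
# assumptions for illustration, not the paper's code.
import numpy as np
from sklearn.cluster import KMeans

def group_features(X, n_groups=10, seed=0):
    """Cluster feature columns with k-means; each cluster becomes one SGL group."""
    labels = KMeans(n_clusters=n_groups, random_state=seed, n_init=10).fit_predict(X.T)
    return [np.where(labels == g)[0] for g in range(n_groups)]

def sgl_fit(X, y, groups, lam=0.1, alpha=0.5, lr=None, n_iter=500):
    """Minimise 0.5/n ||y - X b||^2 + (1-alpha)*lam*sum_g sqrt(p_g)*||b_g||_2
    + alpha*lam*||b||_1 with proximal gradient steps."""
    n, p = X.shape
    beta = np.zeros(p)
    if lr is None:                          # step size from the Lipschitz constant
        lr = n / (np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n     # gradient of the smooth squared loss
        z = beta - lr * grad
        # l1 (lasso) proximal step: soft-thresholding of every coefficient
        z = np.sign(z) * np.maximum(np.abs(z) - lr * alpha * lam, 0.0)
        # group-lasso proximal step: shrink (or zero out) each group as a whole
        for g in groups:
            norm_g = np.linalg.norm(z[g])
            thresh = lr * (1 - alpha) * lam * np.sqrt(len(g))
            z[g] = 0.0 if norm_g <= thresh else z[g] * (1 - thresh / norm_g)
        beta = z
    return beta

# Usage on synthetic data: most coefficients (and whole groups) end up exactly zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))
y = X[:, :5] @ np.array([2.0, -1.5, 1.0, 0.5, -2.0]) + 0.1 * rng.standard_normal(200)
groups = group_features(X, n_groups=10)
beta = sgl_fit(X, y, groups, lam=0.1, alpha=0.5)
print("non-zero coefficients:", np.count_nonzero(beta))
```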
Paula L. Amaral Santos, Sultan Imangaliyev, Klamer Schutte, Evgeni Levin
Lipeng Chen, Daixi Jia, Sheng Yang, Fengge Wu, Junsuo Zhao
Huaqing Zhang, Jian Wang, Zhanquan Sun, Jacek M. Żurada, Nikhil R. Pal