Kazuaki Aoki, Toshiharu Watanabe, Mineichi Kudo
Abstract In pattern recognition, feature selection is effective for improving the performance of classifiers and for reducing the measurement cost of features. In particular, by removing features with no discriminative information, an improvement can be expected in the estimation precision of classifier parameters; as a result, higher-performance classifiers can be constructed than when all of the features are used. Many of the feature selection techniques proposed so far attempt to select a single feature subset common to all classes. However, it seems reasonable to assume that the optimum feature subset for classification differs for each set of classes to be discriminated. In this paper, the authors investigate the effectiveness of feature subsets that depend on sets of classes and use the extracted class-dependent feature subsets to construct a decision tree. In addition, they show the effectiveness of this method through character recognition experiments. © 2005 Wiley Periodicals, Inc. Syst Comp Jpn, 36(4): 37–47, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10666
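The core idea — that the most discriminative feature subset can differ per pair of classes — can be illustrated with a small sketch. This is not the paper's algorithm; it uses a simple Fisher-like separability score (an assumption for illustration) to rank features separately for each class pair, showing that different pairs can select different subsets:

```python
import numpy as np

def class_pair_features(X, y, a, b, k=2):
    """Rank features for discriminating classes a vs b by a
    Fisher-like score |mean_a - mean_b| / (std_a + std_b) and
    return the indices of the top-k features.
    (Illustrative criterion only; not the paper's exact method.)"""
    Xa, Xb = X[y == a], X[y == b]
    score = np.abs(Xa.mean(0) - Xb.mean(0)) / (Xa.std(0) + Xb.std(0) + 1e-12)
    return tuple(np.argsort(score)[::-1][:k])

rng = np.random.default_rng(0)
n = 200
# Synthetic data: feature 0 separates class 0 from the rest,
# feature 1 separates class 2 from the rest, features 2-3 are noise.
X = rng.normal(size=(3 * n, 4))
y = np.repeat([0, 1, 2], n)
X[y == 0, 0] += 4.0
X[y == 2, 1] += 3.0

for a, b in [(0, 1), (0, 2), (1, 2)]:
    print((a, b), "->", class_pair_features(X, y, a, b, k=1))
```

Here the pair (0, 1) selects feature 0 while the pair (1, 2) selects feature 1, so no single common subset is optimal for every discrimination — the situation the class-dependent approach exploits. Each pairwise subset could then feed a node of a decision tree, as the paper proposes.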
Anilú Franco-Árcega, Jesús Ariel Carrasco-Ochoa, Guillermo Sánchez-Díaz, José Fco. Martínez-Trinidad
Wenhua Xu, Zheng Qin, Hao Hu, Nan Zhao