Hao Chen, Chulin Sha, Mingyin Jiao, Changbin Shao, Shang Gao, Hualong Yu, Bin Qin
To adapt continuously to dynamic data distributions, existing incremental and online learning methods adopt bagging or boosting structures, in which some sub-classifiers are abandoned when the data distribution varies significantly during learning. As a result, these ensemble classifiers may fail to reach the global optimum. Furthermore, training static sub-classifiers that are dropped when concept drift emerges incurs unnecessary computational cost. To address these issues, this study proposes a novel training method built on a single dynamic classifier, the dynamic incremental adaptive Takagi–Sugeno–Kang fuzzy classifier (DIA-TSK), which leverages the strong non-linear modeling capability and interpretability of the TSK fuzzy system. DIA-TSK employs a multi-dimensional incremental learning strategy that learns from new data in real time while maintaining the globally optimal solution across various online application scenarios. DIA-TSK incorporates two distinct learning paradigms: online learning (O-DIA-TSK) and batch incremental learning (B-DIA-TSK). These modules can work separately or collaborate synergistically to achieve rapid, precise, and resource-efficient incremental learning. O-DIA-TSK significantly reduces the computational complexity of the incremental process, effectively meeting the real-time learning requirements of high-frequency dynamic data streams. Moreover, its novel incremental update mechanism dynamically adjusts model parameters to ensure progressive optimization, enhancing both real-time performance and learning accuracy. For large-scale data sets, DIA-TSK evolves into B-DIA-TSK, which performs batch updates over multiple samples based on the Woodbury matrix identity.
This extension substantially improves computational efficiency and robustness during incremental learning, making it particularly suitable for high-dimensional and complex data sets. Extensive comparative experiments demonstrate that the DIA-TSK approaches significantly outperform existing incremental learning methods across multiple dynamic data sets, with notable advantages in computational efficiency, classification accuracy, and memory management. In the experimental comparison, O-DIA-TSK and B-DIA-TSK achieve significantly better classification performance than the comparison methods, with training-time reductions of up to 33.3% and 55.8%, respectively, demonstrating the advantage of DIA-TSK on classification tasks with dynamic data.
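To illustrate the kind of batch update the abstract refers to, the following sketch applies the Woodbury matrix identity to a ridge-regularized least-squares model, updating the inverse Gram matrix and the solution vector with a new batch of samples instead of retraining from scratch. This is a generic recursive-least-squares example under assumed notation (P, w, Xb, yb), not the authors' exact DIA-TSK update for the TSK consequent parameters:

```python
import numpy as np

def woodbury_batch_update(P, w, Xb, yb):
    """Incorporate a new batch (Xb, yb) into a ridge least-squares model.
    P: (d, d) inverse regularized Gram matrix, (X'X + lam*I)^{-1}
    w: (d,) current solution; Xb: (m, d) new batch; yb: (m,) new targets."""
    m = Xb.shape[0]
    # Woodbury identity: (A + Xb'Xb)^{-1} = P - P Xb' (I_m + Xb P Xb')^{-1} Xb P
    # Only an m x m system is solved, which is cheap when the batch is small.
    S = np.eye(m) + Xb @ P @ Xb.T
    P_new = P - P @ Xb.T @ np.linalg.solve(S, Xb @ P)
    # Batch form of the recursive least-squares solution update
    w_new = w + P_new @ Xb.T @ (yb - Xb @ w)
    return P_new, w_new

# Example: fit an initial chunk, then absorb a new batch incrementally
rng = np.random.default_rng(0)
d, lam = 5, 1.0
X0, y0 = rng.normal(size=(20, d)), rng.normal(size=20)
Xb, yb = rng.normal(size=(8, d)), rng.normal(size=8)

P = np.linalg.inv(X0.T @ X0 + lam * np.eye(d))
w = P @ X0.T @ y0  # ridge solution on the initial chunk
P, w = woodbury_batch_update(P, w, Xb, yb)

# The incremental result matches retraining on all data from scratch
X_all, y_all = np.vstack([X0, Xb]), np.concatenate([y0, yb])
w_direct = np.linalg.solve(X_all.T @ X_all + lam * np.eye(d), X_all.T @ y_all)
assert np.allclose(w, w_direct)
```

The payoff is the same one the abstract claims for B-DIA-TSK: per-batch cost scales with the batch size m rather than the full accumulated data set.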