Yunlong Zhu, Haibin Duan, Zheng Wang, Eun-Hu Kim, Zunwei Fu, Witold Pedrycz
This article introduces the Bayesian probabilistic fuzzy neural network (BPFNN), a unified architecture designed to overcome the limitations of conventional fuzzy clustering and neural networks with respect to uncertainty, noise, and interpretability. At its core, the Bayesian probabilistic fuzzy $C$-means (BPFCM) algorithm is employed to define the hidden-layer nodes, extending traditional FCM through non-Gaussian modeling and posterior inference via Markov chain Monte Carlo (MCMC). By combining Metropolis-Hastings (MH) updates for the memberships with Gibbs sampling for parameter estimation, BPFCM yields probabilistic memberships that capture uncertainty in the antecedent rules more effectively than deterministic approaches. Since the hidden-layer activations represent only similarity values between inputs and cluster centers, the original input features are not directly preserved. To compensate, the hidden-to-output connections are formulated as linear functions of the input, ensuring recovery of discriminative information in the consequent rules. These functions are optimized using a generalized cross-entropy (GCE) objective, with iteratively reweighted least squares (IRLS) employed for efficient and regularized updates. Extensive experiments on benchmark datasets and high-dimensional laser-induced breakdown spectroscopy (LIBS) spectral data confirm that BPFNN consistently surpasses both classical fuzzy systems and contemporary deep learning models, providing improved accuracy, robustness, and interpretability.
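To make the pipeline concrete, the following is a minimal NumPy sketch of the three ingredients the abstract names: FCM-style memberships as hidden activations, a Metropolis-Hastings refinement of those memberships, and IRLS updates for a cross-entropy consequent. All modeling choices here (a Gaussian reconstruction likelihood standing in for the paper's non-Gaussian model, a Dirichlet proposal, binary cross-entropy instead of the full GCE objective, and all function names) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fcm_memberships(X, centers, m=2.0):
    """Classic FCM memberships: u[i,k] ~ dist(x_i, c_k)^(-2/(m-1)), rows sum to 1."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def mh_refine_memberships(U, centers, X, n_steps=200, conc=200.0):
    """Metropolis-Hastings refinement of membership rows under a Gaussian
    reconstruction likelihood (a simplified stand-in for the paper's model;
    the asymmetric Dirichlet proposal correction is omitted for brevity)."""
    U = U.copy()
    def loglik(u_row, x):
        return -0.5 * np.sum((x - u_row @ centers) ** 2)
    for _ in range(n_steps):
        i = rng.integers(len(X))
        prop = rng.dirichlet(conc * U[i] + 1e-3)  # proposal concentrated at current row
        if np.log(rng.uniform() + 1e-300) < loglik(prop, X[i]) - loglik(U[i], X[i]):
            U[i] = prop
    return U

def irls_fit(H, y, n_iter=25, ridge=1.0):
    """IRLS / Newton updates for ridge-regularized binary cross-entropy;
    H is the design matrix built from hidden activations (and raw inputs)."""
    Hb = np.hstack([H, np.ones((len(H), 1))])  # append bias column
    w = np.zeros(Hb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(Hb @ w, -30, 30)))
        W = p * (1.0 - p) + 1e-8                      # IRLS weights
        A = Hb.T @ (W[:, None] * Hb) + ridge * np.eye(Hb.shape[1])
        w += np.linalg.solve(A, Hb.T @ (y - p) - ridge * w)
    return w

# Toy demo: two well-separated blobs, one cluster center per class.
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(3.0, 0.5, (50, 2))])
y = np.r_[np.zeros(50), np.ones(50)]
centers = np.array([[0.0, 0.0], [3.0, 3.0]])

U = mh_refine_memberships(fcm_memberships(X, centers), centers, X)
feats = np.hstack([U, X])  # memberships + raw input, mimicking input-linear consequents
w = irls_fit(feats, y)
logits = np.hstack([feats, np.ones((len(feats), 1))]) @ w
acc = np.mean((logits > 0) == y)
```

Concatenating the raw input `X` to the membership matrix is a crude proxy for the paper's input-linear consequent functions: it restores the discriminative information that similarity-only activations would otherwise discard.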