JOURNAL ARTICLE

Towards Efficient and Privacy-Preserving Federated Learning for HMM Training

Abstract

The hidden Markov model (HMM) plays a pivotal role in many IoT applications because of its ability to model time-varying sequences. Since training datasets often reside in isolated data islands and their privacy must be taken seriously, the HMM should be trained in a privacy-preserving manner. A typical framework for such training is federated learning, in which a federated server and many data owners collaboratively train an HMM without revealing the data owners' data to the server or the trained model to the data owners. Because existing privacy-preserving HMM training schemes are computationally intensive, in this paper we propose an efficient and privacy-preserving federated learning scheme for HMM training. First, we transform all HMM training computations into matrix- and vector-based computations over the real domain. Then, we present our federated HMM training scheme, which applies matrix encryption to protect the privacy of HMM training. Finally, we show through a rigorous security analysis that our scheme is privacy-preserving, and through extensive experimental evaluation that it is efficient.
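The abstract does not spell out the matrix encryption used, but one common form is multiplicative masking: a matrix M is encrypted as P·M·Q with secret random invertible matrices, so a server can multiply masked matrices while the inner masks cancel. The sketch below illustrates this idea on 2×2 matrices; the matrix values and masking layout are illustrative assumptions, not the authors' construction.

```python
import random

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(M):
    """Closed-form inverse of a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def rand_invertible():
    """Sample a random 2x2 matrix, retrying until it is well-conditioned."""
    while True:
        M = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        if abs(M[0][0] * M[1][1] - M[0][1] * M[1][0]) > 0.1:
            return M

# Hypothetical private matrices held by a data owner
# (e.g. HMM transition/emission statistics).
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]

# Owner side: mask A and B with random invertible matrices P, Q, R.
P, Q, R = rand_invertible(), rand_invertible(), rand_invertible()
enc_A = mat_mul(mat_mul(P, A), Q)           # P * A * Q
enc_B = mat_mul(mat_mul(mat_inv(Q), B), R)  # inv(Q) * B * R

# Server side: multiplies the masked matrices; the inner masks cancel.
enc_AB = mat_mul(enc_A, enc_B)              # P * (A*B) * R

# Owner side: strip the outer masks to recover A*B.
AB = mat_mul(mat_inv(P), mat_mul(enc_AB, mat_inv(R)))
```

The server only ever sees the masked matrices enc_A and enc_B, yet the owner recovers the exact product A·B, which is the kind of computation the forward-backward recursions of HMM training reduce to once expressed in matrix form.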

Keywords:
Hidden Markov model; Federated learning; Training set; Artificial intelligence; Computer security

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.26
References: 15
Citation Normalized Percentile: 0.61


Topics

Cryptography and Data Security (Physical Sciences → Computer Science → Artificial Intelligence)
Privacy-Preserving Technologies in Data (Physical Sciences → Computer Science → Artificial Intelligence)
Stochastic Gradient Optimization Techniques (Physical Sciences → Computer Science → Artificial Intelligence)

