JOURNAL ARTICLE

Dictionary Learning-Structured Reinforcement Learning With Adaptive-Sparsity Regularizer

Zhenni Li, Jianhao Tang, Haoli Zhao, Ci Chen, Shengli Xie

Year: 2023  Journal: IEEE Transactions on Aerospace and Electronic Systems  Vol: 60 (2)  Pages: 1753-1769  Publisher: Institute of Electrical and Electronics Engineers

Abstract

Deep reinforcement learning (DRL) has been applied to satellite navigation and positioning applications, and its performance relies heavily on the function-approximation capability of deep neural networks. However, existing DRL models suffer from catastrophic interference, resulting in inaccurate function approximation. Sparse-coding-based DRL is an effective way of mitigating this interference, but existing methods face two challenging issues: first, the value-function estimation network suffers from instability problems with gradient backpropagation, including gradient explosion and gradient vanishing; second, existing methods are limited to hand-crafted sparse regularizers that produce only static sparsity, which may be difficult to apply across varied, dynamic reinforcement learning (RL) environments. In this article, we propose a novel dictionary learning (DL)-structured RL model with an adaptive-sparsity regularizer (ASR) that alleviates catastrophic interference and enables accurate value-function approximation, thereby improving RL performance. To alleviate the interference and avoid the instability problems in RL, a feedforward DL-structured RL model is constructed to predict the value function without requiring gradient backpropagation. To learn data-driven sparse representations with adaptive sparsity, we use the learnable sparse regularizer ASR in the model, whose key hyperparameters can be trained to adapt to variable RL environments. To optimize the model efficiently, the model parameters are first learned in a pretraining stage; in the subsequent control-training stage, only the value weights used for value-function approximation need to be fine-tuned for actual RL applications. Our comparative experiments in benchmark environments demonstrate that the proposed method outperforms existing state-of-the-art sparse-coding-based RL algorithms.
In terms of accumulated rewards (a measure of the quality of the learned policy), the improvement was over 63% in the Cart Pole environment and nearly 10% in Puddle World. Furthermore, the proposed algorithm maintains relatively high performance in the presence of noise of up to 20 dB.
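The abstract's core idea (feedforward sparse coding over a learned dictionary, with a linear value readout, so no gradient backpropagation through a deep network) can be illustrated with a minimal sketch. This is not the authors' implementation: the dictionary and value weights are random stand-ins, the ISTA encoder uses a plain soft threshold, and the threshold level merely plays the role of the paper's learnable sparsity knob.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_atoms = 8, 32          # state-feature dim, dictionary size (illustrative)
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)       # unit-norm dictionary atoms
w = rng.standard_normal(n_atoms)     # value weights (the part fine-tuned during control training)

def soft_threshold(x, thr):
    """Proximal operator of the l1 norm; thr controls sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def sparse_code(phi, D, thr=0.5, n_iter=50):
    """ISTA iterations: purely feedforward encoding, no backprop."""
    L = np.linalg.norm(D, 2) ** 2    # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + D.T @ (phi - D @ z) / L, thr / L)
    return z

phi = rng.standard_normal(n_features)   # state feature vector
z = sparse_code(phi, D)                 # sparse representation of the state
value = w @ z                           # linear value-function estimate
print(f"nonzeros: {np.count_nonzero(z)}/{z.size}, value: {value:.3f}")
```

In the paper's scheme the threshold would be a trained, environment-adaptive quantity (the ASR) rather than the fixed constant used here, and the dictionary would come from the pretraining stage.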

Keywords:
Reinforcement learning; Computer science; Artificial intelligence; Adaptive learning; Machine learning; Pattern recognition (psychology)

Metrics

Cited by: 3
FWCI (Field-Weighted Citation Impact): 0.77
References: 39
Citation Normalized Percentile: 0.74

Topics

Machine Learning and ELM
Physical Sciences →  Computer Science →  Artificial Intelligence
Elevator Systems and Control
Physical Sciences →  Engineering →  Control and Systems Engineering
Adaptive Dynamic Programming Control
Physical Sciences →  Computer Science →  Computational Theory and Mathematics

Related Documents

JOURNAL ARTICLE

Discriminative structured dictionary learning with hierarchical group sparsity

Yong Xu, Yuping Sun, Yuhui Quan, Bo Zheng

Journal: Computer Vision and Image Understanding  Year: 2015  Vol: 136  Pages: 59-68
JOURNAL ARTICLE

Structured dictionary learning based on group sparsity

Jingfeng Guo, Xian Li

Journal: Journal of Image and Graphics  Year: 2012  Vol: 17 (11)  Pages: 1347-1352
JOURNAL ARTICLE

Learning with Structured Sparsity

Junzhou Huang, Tong Zhang, Dimitris Metaxas

Journal: Journal of Machine Learning Research  Year: 2011