JOURNAL ARTICLE

Curiosity-Driven Exploration for Off-Policy Reinforcement Learning Methods

Abstract

Deep reinforcement learning (DRL) has achieved remarkable results in many high-dimensional continuous control tasks. However, standard RL agents still explore the environment randomly, which leads to low exploration efficiency and poor learning performance, especially in robotic manipulation tasks with sparse rewards. To address this problem, we introduce a simplified Intrinsic Curiosity Module (S-ICM) into off-policy RL methods that encourages the agent to pursue novel and surprising states, thereby improving its exploration capability. The method can be combined with any off-policy RL algorithm. We evaluate our approach on three challenging robotic manipulation tasks provided by OpenAI Gym, combining our method with Deep Deterministic Policy Gradient (DDPG) both with and without Hindsight Experience Replay (HER). The empirical results show that the proposed method significantly outperforms the vanilla RL algorithms in both sample efficiency and learning performance.
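
The abstract describes S-ICM only at a high level, so the following is a minimal sketch of the general idea rather than the authors' implementation. A common way to simplify Pathak et al.'s Intrinsic Curiosity Module is to drop the inverse model and compute the curiosity bonus as the prediction error of a forward dynamics model directly in state space; the names ForwardModel and curiosity_bonus and the scale eta below are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Forward dynamics model: predicts the next state from (state, action)."""
    def __init__(self, state_dim, action_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def curiosity_bonus(model, state, action, next_state, eta=0.5):
    """Intrinsic reward: scaled prediction error of the forward model.
    Poorly predicted (novel, surprising) transitions earn a larger bonus."""
    with torch.no_grad():
        pred = model(state, action)
    return eta * 0.5 * (pred - next_state).pow(2).mean(dim=-1)

# In an off-policy loop such as DDPG (with or without HER), the bonus augments
# the sparse extrinsic reward on each sampled transition,
#   r_total = r_ext + curiosity_bonus(model, s, a, s_next),
# while the forward model is fit by MSE on the same replay minibatches:
#   loss = ((model(s, a) - s_next) ** 2).mean(); loss.backward(); opt.step()
```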

Keywords:
Hindsight bias; Reinforcement learning; Curiosity; Computer science; Artificial intelligence; Machine learning; Competence (human resources); Psychology; Cognitive psychology

Metrics

Cited by: 16
FWCI (Field-Weighted Citation Impact): 1.23
References: 36
Citation Normalized Percentile: 0.84

Topics

Reinforcement Learning in Robotics (Physical Sciences → Computer Science → Artificial Intelligence)
Robot Manipulation and Learning (Physical Sciences → Engineering → Control and Systems Engineering)
Robotic Locomotion and Control (Physical Sciences → Engineering → Biomedical Engineering)