JOURNAL ARTICLE

Reinforcement Learning with Deep Deterministic Policy Gradient

Abstract

This study reviews the major developments of Deep Deterministic Policy Gradient (DDPG) in the field of reinforcement learning. DDPG builds on ideas from the Deep Q-Network (DQN) and can handle the much more challenging class of problems that operate over continuous action spaces. Its core idea is an actor-critic architecture (shown in Figure 5) that learns more competitive policies, allowing the model to use neural network function approximators in large state and action spaces. Owing to this capacity, DDPG has many useful applications to real-world problems in fields such as robotics and control systems. However, like most model-free reinforcement learning methods, DDPG still requires a large number of training steps, and this remains its major difficulty.
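The actor-critic scheme the abstract describes can be illustrated with a minimal sketch. All names, dimensions, and the single transition below are made up for illustration; linear approximators stand in for the deep networks DDPG actually uses, but the three updates shown (critic TD regression against target networks, the deterministic policy gradient for the actor, and soft target updates) follow the standard DDPG recipe.

```python
import numpy as np

# Minimal DDPG-style update sketch (illustrative; real DDPG uses deep nets
# and minibatches sampled from a replay buffer).
rng = np.random.default_rng(0)
state_dim, action_dim = 3, 1
gamma, tau, lr = 0.99, 0.005, 1e-2

# Actor mu(s) = W_mu @ s ; critic Q(s, a) = w_q . [s; a] (linear for brevity).
W_mu = rng.normal(size=(action_dim, state_dim))
w_q = rng.normal(size=(state_dim + action_dim,))
W_mu_targ, w_q_targ = W_mu.copy(), w_q.copy()

def mu(s, W):
    return W @ s

def Q(s, a, w):
    return w @ np.concatenate([s, a])

# One made-up transition (s, a, r, s'); the action carries exploration noise.
s = rng.normal(size=state_dim)
a = mu(s, W_mu) + 0.1 * rng.normal(size=action_dim)
r = 1.0
s2 = rng.normal(size=state_dim)

# Critic update: gradient step on (Q(s,a) - y)^2, with the TD target y
# computed from the *target* actor and critic for stability.
y = r + gamma * Q(s2, mu(s2, W_mu_targ), w_q_targ)
td_err = Q(s, a, w_q) - y
w_q -= lr * td_err * np.concatenate([s, a])

# Actor update: deterministic policy gradient, dQ/da * dmu/dtheta.
dQ_da = w_q[state_dim:]          # for a linear critic, constant in a
W_mu += lr * np.outer(dQ_da, s)  # gradient *ascent* on Q(s, mu(s))

# Soft target updates: theta' <- tau*theta + (1 - tau)*theta'.
W_mu_targ = tau * W_mu + (1 - tau) * W_mu_targ
w_q_targ = tau * w_q + (1 - tau) * w_q_targ
```

The soft target update with small tau is what distinguishes DDPG's target networks from DQN's periodic hard copies: the targets drift slowly toward the learned networks, which keeps the TD regression target nearly stationary.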

Keywords:
Reinforcement learning, Artificial intelligence, Computer science, Action, Field, Artificial neural network, Function, State space, Robotics, Control, Machine learning, Robot, Mathematics

Metrics

Cited by: 64
FWCI (Field Weighted Citation Impact): 4.52
References: 22
Citation Normalized Percentile: 0.95 (in top 1%)

Topics

Reinforcement Learning in Robotics
Physical Sciences → Computer Science → Artificial Intelligence
Adaptive Dynamic Programming Control
Physical Sciences → Computer Science → Computational Theory and Mathematics
Adversarial Robustness in Machine Learning
Physical Sciences → Computer Science → Artificial Intelligence
© 2026 ScienceGate Book Chapters — All rights reserved.