JOURNAL ARTICLE

Past Data-Driven Adaptation in Hierarchical Reinforcement Learning

Abstract

Reinforcement learning algorithms struggle with tasks that have complex hierarchical dependency structures, whereas humans typically represent such a task in a structured way and solve it layer by layer. In this paper, we propose a novel approach called Past Data-Driven Adaptation in Hierarchical Reinforcement Learning (AdaHRL). AdaHRL leverages past samples from a replay buffer to discover subgoals and construct a subgoal tree, effectively steering the agent's learning trajectory. At the same time, AdaHRL adjusts the data distribution of the entire replay buffer with a filter function, enabling adaptive learning in the agent. Experimental results demonstrate that our approach outperforms the Unified Model-Free HRL Framework (UHRL) and Hindsight Experience Replay (HER) on tasks with complex hierarchical dependencies.
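The two replay-buffer mechanisms the abstract describes, mining past samples for subgoal candidates and reshaping the buffer's data distribution with a filter function, might be sketched as follows. This is a minimal illustration under stated assumptions: the class, the `success` flag, and the method names are hypothetical, not the paper's actual implementation.

```python
import random
from collections import deque

class ReplayBuffer:
    """Toy replay buffer illustrating two AdaHRL-style operations:
    subgoal discovery from past samples, and filtered sampling.
    Transitions are dicts with (assumed) keys 'next_state' and 'success'."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def discover_subgoals(self, top_k=3):
        # Assumption: states reached most often in successful transitions
        # make good subgoal candidates for building a subgoal tree.
        counts = {}
        for t in self.buffer:
            if t["success"]:
                counts[t["next_state"]] = counts.get(t["next_state"], 0) + 1
        return [s for s, _ in sorted(counts.items(), key=lambda kv: -kv[1])[:top_k]]

    def filtered_sample(self, batch_size, filter_fn):
        # The filter function reshapes the effective data distribution by
        # restricting sampling to transitions that pass the predicate.
        kept = [t for t in self.buffer if filter_fn(t)]
        return random.sample(kept, min(batch_size, len(kept)))
```

In a full system the discovered subgoals would be organized into the subgoal tree the abstract mentions; here they are returned as a flat ranked list for brevity.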

Keywords:
Reinforcement learning, Computer science, Adaptation, Dependency, Artificial intelligence, Task, Construct, Tree, Layer, Machine learning, Trajectory

Metrics

Cited By: 0 · FWCI (Field-Weighted Citation Impact): 0.00 · References: 9 · Citation Normalized Percentile: 0.07

Topics

Reinforcement Learning in Robotics
Data Stream Mining Techniques
Explainable Artificial Intelligence (XAI)
(all under Physical Sciences → Computer Science → Artificial Intelligence)


© 2026 ScienceGate Book Chapters — All rights reserved.