JOURNAL ARTICLE

Complex-valued reinforcement learning with hierarchical architecture

Abstract

Hierarchical complex-valued reinforcement learning is proposed in order to solve the perceptual aliasing problem. Perceptual aliasing arises when an agent operates in a real environment with an incomplete set of sensors, so that distinct states produce identical observations; this makes learning difficult. HQ-learning (hierarchical Q-learning) and complex-valued reinforcement learning have previously been proposed to address this problem. HQ-learning is a hierarchical extension of Q-learning in which a task is divided into a sequence of simpler sub-tasks, each solvable with a memory-less policy, but it requires a considerable amount of learning time. In complex-valued reinforcement learning, context dependence is represented by complex-valued action-value functions. This enables the agent to act adaptively, but it may fail on problems that contain cycles of perceptually aliased states. In this paper, complex-valued reinforcement learning with a hierarchical design based on HQ-learning is proposed. Experimental results show the effectiveness of the proposed method.
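To make the core idea of a complex-valued action-value function concrete, the following is a minimal sketch of a tabular complex-valued Q-learner. It is an illustration only, not the paper's exact method: the class name, the rotation scheme, and all parameter values (alpha, gamma, omega) are assumptions. The key idea it shows is that each Q-value is a complex number whose magnitude plays the usual value role, while its phase encodes temporal context, so one aliased observation can map to different preferred actions depending on when it is revisited.

```python
import numpy as np

class ComplexQLearner:
    """Illustrative tabular complex-valued Q-learning (hypothetical sketch)."""

    def __init__(self, n_obs, n_actions, alpha=0.1, gamma=0.9, omega=np.pi / 6):
        self.Q = np.zeros((n_obs, n_actions), dtype=complex)
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.omega = omega      # phase rotation per time step (assumption)
        self.ref = 1.0 + 0.0j   # internal reference value, rotated each step

    def select_action(self, obs):
        # Prefer the action whose Q-value best aligns with the current
        # reference phase (greedy here for brevity; epsilon-greedy in practice).
        scores = np.real(self.Q[obs] * np.conj(self.ref))
        return int(np.argmax(scores))

    def update(self, obs, action, reward, next_obs):
        # Standard TD target, evaluated relative to the current context phase.
        best_next = np.max(np.real(self.Q[next_obs] * np.conj(self.ref)))
        target = reward + self.gamma * best_next
        # Move the stored value toward the target placed at the reference phase,
        # so the phase of Q records *when* in the episode the update happened.
        self.Q[obs, action] += self.alpha * (target * self.ref - self.Q[obs, action])

    def step_reference(self):
        # Advance the internal context by rotating the reference value.
        self.ref *= np.exp(1j * self.omega)
```

Because the reference value rotates as time passes, two visits to the same aliased observation at different points in an episode compare against different phases, which is how context dependence is expressed without an explicit memory of past observations.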

Keywords:
Reinforcement learning; Aliasing; Computer science; Artificial intelligence; Set (abstract data type); Learning classifier system; Q-learning; Perception; Unsupervised learning; Machine learning

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 0.80
References: 6
Citation Normalized Percentile: 0.79

Topics

Evolutionary Algorithms and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
Neural Networks and Reservoir Computing (Physical Sciences → Computer Science → Artificial Intelligence)
Neural Networks and Applications (Physical Sciences → Computer Science → Artificial Intelligence)