In recent years there have been many successes in boosting the performance of Deep Q-Networks (DQN). Dueling DQN uses a simple dueling architecture yet significantly improves the performance of DQN [1]. However, Dueling DQN applies the dueling idea only to estimating Q-values. In this paper, we introduce a state representation dueling network, which provides an auxiliary task designed to be combined with other reinforcement learning algorithms to improve the performance of deep RL. The state representation dueling network is designed to benefit reinforcement learning tasks with high-dimensional observations, such as camera input. Our experiments show that adding the state representation dueling network to Dueling DQN improves both the training speed and the final performance of Dueling DQN in the CartPole environment.
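The dueling architecture mentioned above splits the Q-value estimate into a scalar state-value stream and a per-action advantage stream, then recombines them. As a rough illustration (a minimal NumPy sketch of the aggregation step, not the paper's implementation), the combination used by Dueling DQN can be written as:

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine a state-value estimate V(s) with per-action advantages A(s, a).

    Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a'); subtracting the mean
    advantage is the identifiability trick Dueling DQN uses so that V
    and A are uniquely determined from Q.
    """
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

# Example: when all advantages are equal, every Q-value collapses to V(s).
q = dueling_q(1.0, [0.5, 0.5])  # -> array([1., 1.])
```

In a full network, `value` and `advantages` would be the outputs of two separate heads sharing a common feature extractor; only the aggregation rule is shown here.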
[1] Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling Network Architectures for Deep Reinforcement Learning. In ICML, 2016.
Qiaoyuan Xiang, Xiaoyu Liang, Xiao Yu-xing, Zhi Zhang