Learning Classifier Systems (LCSs) have been widely used to tackle Reinforcement Learning (RL) problems because they generalize well and produce simple, interpretable rule-based solutions. The accuracy-based LCS, XCS, is the most widely used variant for single-objective RL problems. Because many real-world problems exhibit multiple conflicting objectives, recent work has sought to adapt XCS to Multi-Objective Reinforcement Learning (MORL) tasks. However, many of these algorithms require large amounts of storage or fail to discover Pareto-optimal solutions, owing to the complexity of learning policies that take multiple steps toward multiple possible objectives. This paper employs a decomposition strategy based on MOEA/D within XCS to approximate complex Pareto fronts. To achieve multi-objective learning, a new MORL algorithm is developed that combines XCS with MOEA/D. Experimental results on complex bi-objective maze problems show that the proposed algorithm learns a set of Pareto-optimal solutions without requiring large storage. Analysis of the learned policies reveals successful trade-offs between the distance to a reward and the magnitude of the reward itself.
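To give a concrete sense of what MOEA/D-style decomposition means here, the sketch below shows a standard Tchebycheff scalarization that turns a bi-objective reward vector into a set of scalar subproblems, each of which could in principle supply a separate learning signal to an XCS-like learner. This is a minimal illustration under standard MOEA/D assumptions, not the paper's implementation; the function names, the ideal point, and the example reward values are all hypothetical.

```python
# Illustrative sketch (not the paper's method): MOEA/D-style Tchebycheff
# decomposition of a bi-objective reward into scalar subproblems.
# All names and values here are hypothetical.
import numpy as np

def weight_vectors(n_subproblems: int) -> np.ndarray:
    """Evenly spread weight vectors (w, 1-w) over two objectives."""
    w = np.linspace(0.0, 1.0, n_subproblems)
    return np.stack([w, 1.0 - w], axis=1)

def tchebycheff(reward: np.ndarray, weights: np.ndarray,
                ideal: np.ndarray) -> np.ndarray:
    """Tchebycheff scalarization: max_i w_i * |f_i - z*_i| per subproblem
    (lower is better; z* is the ideal point)."""
    # Broadcast (n_subproblems, 2) * (2,) -> reduce over objectives.
    return np.max(weights * np.abs(reward - ideal), axis=1)

# Example: bi-objective reward = (step-cost to a reward, reward magnitude),
# echoing the distance-vs-reward trade-off analyzed in the paper.
reward = np.array([-5.0, 100.0])   # e.g. 5 steps taken, reward of 100
ideal = np.array([0.0, 100.0])     # hypothetical ideal point z*
W = weight_vectors(5)
print(tchebycheff(reward, W, ideal))  # one scalar cost per subproblem
```

Each weight vector defines one trade-off between the two objectives, so a small, fixed set of scalar subproblems can cover the Pareto front without storing a separate value function for every possible preference, which is the storage advantage the decomposition strategy aims for.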