Traffic Engineering (TE) is an efficient technique for balancing network flows and thus improving the performance of a hybrid Software Defined Network (SDN). Previous TE solutions mainly leverage heuristic algorithms to centrally optimize link weight settings or traffic splitting ratios under static traffic demands. However, as networks grow in scale and management complexity, these centralized TE methods suffer from high computation overhead and long reaction times when re-optimizing flow routing in response to dynamically fluctuating traffic demands or network failures. To enable adaptive and efficient routing through distributed TE, we propose CMRL, a multi-agent reinforcement learning method that divides the routing optimization of a large network into multiple small-scale routing decision-making problems. To coordinate the multiple agents toward a global optimization goal in the hybrid SDN scenario, we construct a virtual environment that captures the different routing constraints imposed by legacy routers and SDN switches, and use it to train the routing agents. To train agents that determine local routing policies from local network observations, we introduce a difference reward assignment mechanism that encourages the agents to cooperatively take optimal routing actions. Extensive simulations on real traffic traces demonstrate the superiority of CMRL in improving TE performance, especially when traffic demands change or network failures occur.
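The difference reward assignment mentioned above can be illustrated with a minimal sketch: each agent's reward is the global utility of the joint action minus the utility obtained when that agent's own action is replaced by a default (counterfactual) action. The function names, the default action, and the toy utility below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of difference-reward credit assignment for cooperative
# agents: D_i = G(a) - G(a with a_i replaced by a default action).
# All names and the toy utility are hypothetical.
from typing import Callable, List, Sequence


def difference_rewards(
    joint_action: Sequence[float],
    global_utility: Callable[[Sequence[float]], float],
    default_action: float = 0.0,
) -> List[float]:
    """Return each agent's difference reward under the given global utility."""
    g = global_utility(joint_action)
    rewards = []
    for i in range(len(joint_action)):
        counterfactual = list(joint_action)
        counterfactual[i] = default_action  # remove agent i's contribution
        rewards.append(g - global_utility(counterfactual))
    return rewards


if __name__ == "__main__":
    # Toy utility: negative maximum link load (TE often minimizes
    # the maximum link utilization, so lower max load = higher utility).
    utility = lambda loads: -max(loads)
    print(difference_rewards([0.9, 0.5, 0.5], utility, default_action=0.5))
```

An agent whose action worsens the bottleneck (here, the first agent with load 0.9) receives a negative difference reward, while agents already at the default level receive zero, giving each agent a localized learning signal aligned with the global objective.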
Yingya Guo, Mingjie Ding, Weihong Zhou, Bin Lin, Cen Chen, Huan Luo