Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are two prevailing architectures for text classification. Traditional approaches combine the strengths of the two networks either by stacking them directly or by concatenating the features each extracts. In this article, a novel approach is proposed that preserves the strengths of both the RNN and the CNN to a great extent. In the proposed approach, a bi-directional RNN encodes each word into forward and backward hidden states, and a neural tensor layer then fuses these bi-directional hidden states into word representations. Meanwhile, a convolutional neural network learns the importance of each word for text classification. Empirical experiments are conducted on several text-classification datasets, and the superior performance of the proposed approach confirms its effectiveness.
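The described architecture can be made concrete with a short sketch. The following PyTorch code is an illustration rather than the authors' implementation: it assumes a GRU as the bi-directional RNN, a Socher-style bilinear term plus a linear term for the neural tensor layer, a single kernel-size-3 convolution producing per-word importance scores, and a softmax-weighted sum as the pooling step. The class names (`NeuralTensorFusion`, `CRNNClassifier`) and all dimensions are hypothetical.

```python
# Minimal sketch of the RNN + neural-tensor + CNN-importance architecture.
# Layer choices and dimensions are assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTensorFusion(nn.Module):
    """Fuse forward/backward hidden states with a bilinear (tensor) term
    plus a linear term over their concatenation (assumed formulation)."""
    def __init__(self, hidden_dim, out_dim):
        super().__init__()
        self.bilinear = nn.Bilinear(hidden_dim, hidden_dim, out_dim)
        self.linear = nn.Linear(2 * hidden_dim, out_dim, bias=False)

    def forward(self, h_fwd, h_bwd):
        # h_fwd, h_bwd: (batch, seq_len, hidden_dim)
        t = self.bilinear(h_fwd.contiguous(), h_bwd.contiguous())
        l = self.linear(torch.cat([h_fwd, h_bwd], dim=-1))
        return torch.tanh(t + l)                       # (B, T, out_dim)

class CRNNClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=128,
                 word_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        self.fuse = NeuralTensorFusion(hidden_dim, word_dim)
        # CNN scores the importance of each word; padding keeps one
        # score per input position.
        self.importance = nn.Conv1d(emb_dim, 1, kernel_size=3, padding=1)
        self.out = nn.Linear(word_dim, num_classes)

    def forward(self, tokens):
        x = self.embed(tokens)                          # (B, T, E)
        h, _ = self.rnn(x)                              # (B, T, 2H)
        h_fwd, h_bwd = h.chunk(2, dim=-1)               # split directions
        words = self.fuse(h_fwd, h_bwd)                 # (B, T, D) word reps
        scores = self.importance(x.transpose(1, 2))     # (B, 1, T)
        alpha = F.softmax(scores.squeeze(1), dim=-1)    # per-word weights
        doc = torch.bmm(alpha.unsqueeze(1), words).squeeze(1)  # (B, D)
        return self.out(doc)                            # class logits

# Toy usage: a batch of 4 sequences of 20 token ids.
logits = CRNNClassifier(vocab_size=10000)(torch.randint(0, 10000, (4, 20)))
```

One point this sketch makes explicit is the division of labor in the abstract: the RNN branch produces the word representations, while the CNN branch only produces scalar importance weights over words, so neither network's output is simply fed into the other.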