This paper presents a novel latent variable recurrent neural network architecture for jointly modeling sequences of words and (possibly latent) discourse relations between adjacent sentences. A recurrent neural network generates individual words, thus reaping the benefits of discriminatively-trained vector representations. The discourse relations are represented with a latent variable, which can be predicted or marginalized, depending on the task. The resulting model can therefore employ a training objective that includes not only discourse relation classification, but also word prediction. As a result, it outperforms state-of-the-art alternatives for two tasks: implicit discourse relation classification in the Penn Discourse Treebank, and dialog act classification in the Switchboard corpus. Furthermore, by marginalizing over latent discourse relations at test time, we obtain a discourse-informed language model, which improves over a strong LSTM baseline.
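As a minimal sketch of the marginalization the abstract describes (the notation here is ours, not necessarily the paper's): writing $z_n \in \mathcal{Z}$ for the latent discourse relation preceding sentence $\mathbf{y}_n$, a latent variable language model of this form scores each sentence by summing over the unobserved relation,

$$
p(\mathbf{y}_n \mid \mathbf{y}_{<n}) \;=\; \sum_{z_n \in \mathcal{Z}} p(z_n \mid \mathbf{y}_{<n}) \, p(\mathbf{y}_n \mid z_n, \mathbf{y}_{<n}),
$$

where both factors would be parameterized by the recurrent network's hidden state. Under this view, the two uses named in the abstract correspond to the two standard operations on $z_n$: language modeling marginalizes it out as above, while discourse relation classification predicts $\hat{z}_n = \arg\max_{z_n} p(z_n \mid \mathbf{y}_{\leq n})$.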