This paper studies how to automatically generate a natural language text that describes the facts in a knowledge graph (KG). Considering the few-shot setting, we leverage the excellent capacities of pretrained language models (PLMs) in language understanding and generation. We make three major technical contributions, namely representation alignment for bridging the semantic gap between KG encodings and PLMs, relation-biased KG linearization for deriving better input representations, and multi-task learning for learning the correspondence between KG and text. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of our model on the KG-to-text generation task. In particular, our model outperforms all comparison methods in both fully-supervised and few-shot settings. Our code and datasets are available at https:
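The abstract names relation-biased KG linearization only at a high level. As a purely illustrative aid, the following Python sketch shows one generic way a set of triples can be serialized into a token sequence for a PLM; the special tokens <H>/<R>/<T> and the grouping-by-relation heuristic are assumptions for illustration, not the paper's actual method.

    from collections import defaultdict

    def linearize_kg(triples):
        """Serialize (head, relation, tail) triples into a flat string.

        Triples sharing a relation are grouped contiguously, so relation
        tokens act as soft segment markers in the linearized sequence.
        (Illustrative heuristic only; not the authors' exact ordering.)
        """
        by_relation = defaultdict(list)
        for head, relation, tail in triples:
            by_relation[relation].append((head, tail))

        segments = []
        for relation, pairs in by_relation.items():
            for head, tail in pairs:
                segments.append(f"<H> {head} <R> {relation} <T> {tail}")
        return " ".join(segments)

    if __name__ == "__main__":
        kg = [
            ("Alan Turing", "field", "computer science"),
            ("Alan Turing", "born in", "London"),
        ]
        print(linearize_kg(kg))
        # <H> Alan Turing <R> field <T> computer science
        # <H> Alan Turing <R> born in <T> London

The resulting string can then be fed to a sequence-to-sequence PLM as its input; how the paper biases this ordering by relation is detailed in the method section.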
Junyi Li, Tianyi Tang, Wayne Xin Zhao, Zihao Wei, Nicholas Jing Yuan, Ji-Rong Wen