Jinhao Ju, Deqing Yang, Jingping Liu
Although commonsense knowledge graphs (CKGs) have been successfully applied to many natural language processing tasks, they still suffer from incompleteness. Due to the scale and sparsity of CKGs, existing knowledge base completion models remain inadequate for CKGs. In this paper, we propose a commonsense knowledge base completion (CKBC) model that learns structural representations of CKG nodes and relations with a relational graph attention network, and contextual representations with a pre-trained language model. Based on these two types of representations, the scoring decoder in our model predicts the plausibility of a given triple more accurately. Our empirical studies on the representative CKG ConceptNet demonstrate our model's superiority over state-of-the-art CKBC models.
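The abstract describes fusing two representation types per node and relation before scoring a triple. Below is a minimal sketch of that idea in pure Python, with toy hand-picked embeddings and a DistMult-style scorer; the concatenation fusion and the scoring function are illustrative assumptions, not the paper's actual decoder.

```python
# Hypothetical sketch: fuse a structural embedding (e.g., from a relational
# graph attention encoder) with a contextual embedding (e.g., from a
# pre-trained language model), then score a (head, relation, tail) triple.

def fuse(structural, contextual):
    """Concatenate the two representation types into one vector (assumed fusion)."""
    return structural + contextual  # list concatenation

def score(head, relation, tail):
    """DistMult-style triple score: sum_i h_i * r_i * t_i (assumed scorer)."""
    return sum(h * r * t for h, r, t in zip(head, relation, tail))

# Toy 2-dim structural and contextual embeddings (made-up values).
h = fuse([0.2, 0.5], [0.1, 0.3])   # head node, 4-dim after fusion
r = fuse([1.0, 0.0], [0.5, 1.0])   # relation
t = fuse([0.4, 0.1], [0.2, 0.6])   # tail node

print(round(score(h, r, t), 4))    # higher score = more plausible triple
```

A real decoder would learn the fusion and scoring parameters jointly; the point here is only that both representation types contribute to the final triple score.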