Junhao Zheng, Haibin Chen, Qianli Ma
Cross-domain NER is a practical yet challenging problem due to data scarcity in real-world scenarios. A common practice is to first learn a NER model in a rich-resource general domain and then adapt the model to specific domains. Because of the mismatch between entity types across domains, the broad knowledge in the general domain cannot be effectively transferred to the target-domain NER model. To this end, we model the label relationship as a probability distribution and construct label graphs in both the source and target label spaces. To enhance the contextual representation with label structures, we fuse the label graph into the word embeddings output by BERT. By representing label relationships as graphs, we formulate cross-domain NER as a graph matching problem. Furthermore, the proposed method is readily compatible with pre-training methods and is potentially applicable to other cross-domain prediction tasks. Empirical results on four datasets show that our method outperforms a series of transfer learning, multi-task learning, and few-shot learning methods.
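The two core ideas in the abstract, building a label graph from label relationships and fusing it into contextual embeddings, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual model: `label_graph` uses simple empirical transition probabilities between tags as a stand-in for the label-relationship distribution, and `fuse` mixes label embeddings into token vectors via a soft per-token label distribution; all function names, shapes, and the additive fusion rule are hypothetical.

```python
import numpy as np

def label_graph(tag_seqs, labels):
    """Build a label graph as a transition-probability matrix.

    A[i, j] is the empirical probability that label j follows label i
    in the annotated sequences (a toy stand-in for modeling the label
    relationship as a probability distribution).
    """
    idx = {lab: i for i, lab in enumerate(labels)}
    counts = np.zeros((len(labels), len(labels)))
    for seq in tag_seqs:
        for a, b in zip(seq, seq[1:]):
            counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize each row to a distribution; rows with no outgoing edges stay zero.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

def fuse(embeddings, label_probs, label_emb):
    """Fuse label structure into contextual token embeddings.

    embeddings:  (seq_len, d) contextual word vectors (e.g. from BERT)
    label_probs: (seq_len, L) per-token label distribution from a tagger
    label_emb:   (L, d) label embeddings (learnable in a real model)
    Returns label-aware token representations via additive soft mixing.
    """
    return embeddings + label_probs @ label_emb
```

For example, with BIO labels `["O", "B", "I"]` and two tagged sentences, `label_graph` yields a 3x3 row-stochastic matrix over observed transitions, and `fuse` leaves the embedding dimension unchanged, so the fused output can replace the original embeddings downstream.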