In this paper, we propose an approach to automatically learning feature embeddings to address the feature sparseness problem in dependency parsing. Inspired by word embeddings, feature embeddings are distributed representations of features learned from large amounts of auto-parsed data. Our goal is to learn feature embeddings that not only make full use of well-established hand-designed features but also benefit from hidden-class representations of features. Based on these feature embeddings, we present a set of new features for graph-based dependency parsing models. Experiments on standard Chinese and English data sets show that the new parser achieves significant performance improvements over a strong baseline.
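To make the idea concrete, here is a minimal sketch of the core mechanism the abstract describes: sparse, hand-designed parser features are mapped to dense embedding vectors, which can then feed a graph-based parser's scoring function. The feature templates, embedding dimension, and averaging composition below are illustrative assumptions, not details taken from the paper; in the paper the embeddings would be learned from large auto-parsed corpora rather than initialized randomly.

```python
import random

random.seed(0)

EMB_DIM = 8  # embedding size is an assumption, not taken from the paper

# Hypothetical hand-designed features for one candidate dependency arc;
# the concrete templates here are illustrative only.
feature_vocab = ["h=saw+m=dog", "hpos=VBD+mpos=NN", "dist=2"]

# In the paper, feature embeddings are learned from auto-parsed data;
# here we initialize them randomly as a stand-in.
embeddings = {f: [random.gauss(0.0, 1.0) for _ in range(EMB_DIM)]
              for f in feature_vocab}
UNK = [0.0] * EMB_DIM  # back-off vector for unseen features

def embed_arc(features):
    """Average the embeddings of an arc's sparse features into one
    dense vector (one simple composition choice among many)."""
    vecs = [embeddings.get(f, UNK) for f in features]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

dense = embed_arc(["h=saw+m=dog", "dist=2", "unseen-feature"])
print(len(dense))  # 8
```

The dense vector can be combined with the original sparse features, which is one way to "make full use of hand-designed features" while also exploiting the learned hidden-class representations.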
Weiwei Sun, Junjie Cao, Xiaojun Wan