This paper addresses the issue of generalization for Semantic Parsing in an adversarial framework. Building models that are more robust to inter-document variability is crucial for the integration of Semantic Parsing technologies in real applications. The underlying question throughout this study is whether adversarial learning can be used to train models at a higher level of abstraction in order to increase their robustness to lexical and stylistic variations. We propose to perform Semantic Parsing with a domain classification adversarial task, without explicit knowledge of the domain. The strategy is first evaluated on a French corpus of encyclopedic documents, annotated with FrameNet, from an information retrieval perspective, and then on the PropBank Semantic Role Labeling task on the CoNLL-2005 benchmark. We show that adversarial learning increases the generalization capabilities of all models on both in-domain and out-of-domain data.
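The domain-adversarial idea the abstract describes can be illustrated with a gradient-reversal setup (in the style of domain-adversarial training, DANN): a shared encoder is trained to minimize the task loss while *maximizing* the loss of a domain classifier, so its features become less domain-specific. The following NumPy sketch is a hypothetical toy illustration, not the paper's actual model; all names, the regression task, and the two-domain setup are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, not the paper's): two "domains" whose input
# distribution differs by a mean shift, sharing one regression task.
n, d_in, d_h = 200, 8, 4
dom = rng.integers(0, 2, size=(n, 1)).astype(float)   # domain id in {0, 1}
x = rng.normal(size=(n, d_in)) + 2.0 * dom            # domain-shifted inputs
y = x @ rng.normal(size=(d_in, 1))                    # shared task target

# Shared encoder, task head, and adversarial domain-classifier head.
W_enc = 0.1 * rng.normal(size=(d_in, d_h))
w_task = 0.1 * rng.normal(size=(d_h, 1))
w_dom = 0.1 * rng.normal(size=(d_h, 1))
lr, lam = 1e-2, 0.5                                   # lam scales the reversed gradient

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mse0 = float(np.mean((x @ W_enc @ w_task - y) ** 2))  # task error before training
for _ in range(500):
    h = x @ W_enc                                     # shared features
    task_err = h @ w_task - y                         # residual of the task head
    p = sigmoid(h @ w_dom)                            # predicted domain probability
    g_task_h = task_err @ w_task.T / n                # d(task loss)/dh
    g_dom_h = (p - dom) @ w_dom.T / n                 # d(domain loss)/dh
    # Gradient reversal: the encoder *ascends* the domain loss, so its
    # features carry less domain information; both heads descend normally.
    W_enc -= lr * x.T @ (g_task_h - lam * g_dom_h)
    w_task -= lr * h.T @ task_err / n
    w_dom -= lr * h.T @ (p - dom) / n
mse1 = float(np.mean((x @ W_enc @ w_task - y) ** 2))  # task error after training
```

The key design point is the single sign flip (`- lam * g_dom_h`) in the encoder update: the domain classifier itself is trained normally, but the gradient it sends back into the shared features is reversed, which is what pushes the representation toward domain invariance.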