Tianbao Song, Jingbo Sun, Weiming Peng
Cross-prompt automated essay scoring (AES) is challenging because samples differ substantially across prompts, and recent research has moved beyond the overall score to evaluate distinct essay traits. The primary approaches improve AES in cross-prompt scenarios either by learning better shared representations or by transferring common knowledge between source and target prompts. However, existing studies concentrate only on transferring shared features within the essay representation, neglecting external knowledge, and measuring the degree of commonality across samples remains challenging. Intuitively, higher similarity in external knowledge also yields a better shared representation of the essay. Motivated by this, we introduce an extra-essay knowledge similarity transfer to assess sample commonality. Moreover, prior work pays insufficient attention to the intrinsic meaning of the evaluated traits and their varied impact on the model, so we further incorporate an extra-essay knowledge representation to deepen the model's understanding of both the essay under evaluation and the target of the task. Experimental results show that our approach outperforms baseline models on the ASAP++ dataset, confirming the effectiveness of our method.
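The abstract does not spell out how knowledge similarity drives the transfer, so the following is only a minimal sketch of one plausible reading: pairwise cosine similarity between external-knowledge embeddings of source- and target-prompt samples weights a loss that pulls the corresponding essay representations together. All names here (`knowledge_similarity`, `similarity_weighted_transfer_loss`, the tensor shapes) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F


def knowledge_similarity(src_know: torch.Tensor,
                         tgt_know: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity between external-knowledge embeddings.

    src_know: (n_src, d) knowledge embeddings of source-prompt samples
    tgt_know: (n_tgt, d) knowledge embeddings of target-prompt samples
    returns:  (n_src, n_tgt) similarities in [-1, 1]
    """
    src = F.normalize(src_know, dim=-1)
    tgt = F.normalize(tgt_know, dim=-1)
    return src @ tgt.T


def similarity_weighted_transfer_loss(src_repr: torch.Tensor,
                                      tgt_repr: torch.Tensor,
                                      sim: torch.Tensor) -> torch.Tensor:
    """Pull essay representations of source/target pairs together in
    proportion to their external-knowledge similarity, so pairs whose
    knowledge is more alike share more of the representation space.
    """
    w = torch.clamp(sim, min=0.0)            # ignore dissimilar pairs
    dist = torch.cdist(src_repr, tgt_repr)   # (n_src, n_tgt) pairwise L2
    return (w * dist).sum() / (w.sum() + 1e-8)
```

Under this assumed formulation, the weighted loss would be added to the trait-scoring objective during training, so that transfer strength between any source and target sample is governed by their extra-essay knowledge similarity rather than by essay features alone.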