Qingxiu Dong, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, Lei Li
Previous literature has shown that Pretrained Language Models (PLMs) can store factual knowledge. However, we find that facts stored in PLMs are not always correct. This motivates us to explore a fundamental question: how do we calibrate factual knowledge in PLMs without re-training from scratch? In this work, we propose a simple and lightweight method, CaliNet, to achieve this goal. Specifically, we first detect whether a PLM has learned a fact correctly via a contrastive score between right and fake facts. If not, we then use a lightweight method to add and adapt new parameters to the specific factual texts. Experiments on the knowledge probing task show the effectiveness and efficiency of the calibration. In addition, through closed-book question answering, we find that the calibrated PLM possesses knowledge generalization ability after fine-tuning. Beyond the calibration performance, we further investigate and visualize the knowledge calibration mechanism. The code and data are available at
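Below is a minimal sketch of the contrastive detection step described in the abstract, assuming a Hugging Face causal LM scored by sentence log-likelihood. The model choice (gpt2), the scoring function, and the example facts are illustrative assumptions, not the paper's exact probing setup.

```python
# Sketch of contrastive fact detection: score a right fact against a fake
# fact by total log-likelihood under a pretrained LM and compare.
# Assumption: gpt2 and plain log-likelihood scoring stand in for the
# paper's actual model and contrastive probing procedure.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_log_likelihood(text: str) -> float:
    """Total log-probability the LM assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy
        # over the predicted (shifted) tokens.
        loss = model(ids, labels=ids).loss
    # Un-average over the number of predicted tokens to get a total score.
    return -loss.item() * (ids.size(1) - 1)

def knows_fact(right: str, fake: str) -> bool:
    """Contrastive check: does the PLM prefer the right fact over the fake one?"""
    return sentence_log_likelihood(right) > sentence_log_likelihood(fake)

print(knows_fact("The capital of France is Paris.",
                 "The capital of France is Rome."))
```

In the paper's pipeline, facts that fail this kind of contrastive check are the ones routed to the lightweight calibration step, which adds and adapts new parameters on the corresponding factual texts.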