An explosion in the popularity of transformer-based language models (such as GPT-3, BERT, RoBERTa, and ALBERT) has opened the door to new machine learning applications involving language modeling, text generation, and more. However, recent scrutiny has revealed that these models encode biases against certain demographics reflected in their training data. While prior research has tried to mitigate this problem, existing approaches either fail to remove the bias completely, degrade downstream performance ("catastrophic forgetting"), or are costly to execute. This work examines how to reduce gender bias in a GPT-2 language model by fine-tuning fewer than 1% of its parameters. Through quantitative benchmarks, we show that this is a viable way to reduce prejudice in pre-trained language models while remaining cost-effective at scale.
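The parameter-efficient setup described above can be sketched in PyTorch with a BitFit-style recipe, i.e., freezing every weight matrix and training only the bias terms. This is a minimal illustration under assumptions of our own: the toy transformer, its sizes, and the choice of bias-only tuning are hypothetical stand-ins, not the exact model or method evaluated in this work.

```python
import torch.nn as nn

# Toy stand-in for a transformer LM (hypothetical sizes, far smaller than GPT-2).
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, dim_feedforward=1024),
    num_layers=2,
)

# BitFit-style freezing: keep gradients only for bias terms, freeze all weights.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")

# With everything but biases frozen, the trainable share stays under 1%.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4%}")
```

An optimizer for the debiasing fine-tuning step would then be built only over the parameters with `requires_grad=True`, so gradient updates never touch the frozen weights.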
Somayeh Ghanbarzadeh, Yan Huang, Hamid Palangi, Radames Cruz Moreno, Hamed Khanpour
Mingjun Zhou, Zhuoma Daiqing, Nuo Qun, Nyima Tashi
Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Xiaozhi Wang, Zhiyuan Liu, Hai-Tao Zheng, Jianfei Chen, Yang Liu, Jie Tang, Juanzi Li, Maosong Sun
Ting Jiang, Deqing Wang, Fuzhen Zhuang, Ruobing Xie, Feng Xia
Linlin Zhang, Yang Liu, Qingyu Shang, Zhiqiang Yan, Yue Zhou, Yunpeng Men, Yunfeng Li, Weiqi Ren