Parameter-efficient tuning (PET) methods fit pre-trained language models (PLMs) to downstream tasks by either computing a small compressed update for a subset of model parameters, or appending and fine-tuning a small number of new model parameters to the pre-trained network. Hand-designed PET architectures from the literature perform well in practice, but have the potential to be improved via automated neural architecture search (NAS). We propose an efficient NAS method for learning PET architectures via structured and unstructured pruning. We present experiments on GLUE demonstrating the effectiveness of our algorithm and discuss how PET architectural design choices affect performance in practice.
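To make the two PET families in the abstract concrete, below is a minimal PyTorch sketch of one common pattern: a LoRA-style low-rank update wrapped around a frozen pre-trained linear layer. This is an illustrative assumption, not the paper's proposed method; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are hypothetical choices for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical PET module: a frozen pre-trained linear layer
    plus a trainable low-rank update, in the style of LoRA."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Only these r * (in_features + out_features) values are trained.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the learned low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Usage: wrap a projection inside an otherwise frozen PLM.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 16, 768))
print(out.shape)  # torch.Size([2, 16, 768])
```

The rank `r` controls the size of the compressed update; a pruning-based NAS procedure like the one the abstract describes could, in principle, search over such structural choices per layer.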