Zhuocheng Gong, Jiahao Liu, Qifan Wang, Yang Yang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Rui Yan
While transformer-based pre-trained language models (PLMs) have dominated a number of NLP applications, these models are heavy to deploy and expensive to use. Therefore, effectively compressing large-scale PLMs becomes an increasingly important problem. Quantization, which represents high-precision tensors in a low-bit fixed-point format, is a viable solution. However, most existing quantization methods are task-specific, requiring customized training and quantization with a large number of trainable parameters on each individual task. Inspired by the observation that the over-parameterized nature of PLMs makes it possible to freeze most of the parameters during the fine-tuning stage, in this work we propose a novel "quantize before fine-tuning" framework, PreQuant, that differs from both quantization-aware training and post-training quantization. PreQuant is compatible with various quantization strategies, with outlier-aware parameter-efficient fine-tuning incorporated to correct the induced quantization error. We demonstrate the effectiveness of PreQuant on the GLUE benchmark using BERT, RoBERTa, and T5. We also provide an empirical investigation into the workflow of PreQuant, which sheds light on its efficacy.
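To make the abstract's two ingredients concrete, the sketch below illustrates (i) mapping a high-precision weight tensor onto a low-bit fixed-point grid and (ii) marking a small fraction of outlier weights to remain trainable while the rest stay frozen, in the spirit of outlier-aware parameter-efficient fine-tuning. This is a minimal illustration under generic assumptions (symmetric per-tensor quantization, a 4-bit width, a hypothetical 1% outlier ratio), not the PreQuant algorithm itself.

```python
import torch

def quantize_symmetric(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform symmetric quantization of a weight tensor to a `bits`-wide fixed-point grid."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit
    scale = w.abs().max() / qmax          # per-tensor scale; per-channel is also common
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale                      # dequantized values used during fine-tuning

def outlier_mask(w: torch.Tensor, w_q: torch.Tensor, ratio: float = 0.01) -> torch.Tensor:
    """Mark the fraction of weights with the largest quantization error (hypothetical criterion)."""
    err = (w - w_q).abs()
    k = max(1, int(ratio * err.numel()))
    threshold = err.flatten().topk(k).values.min()
    return err >= threshold               # True for outliers that stay trainable

# Toy usage: quantize first, then leave only the outliers trainable.
w = torch.randn(768, 768)
w_q = quantize_symmetric(w, bits=4)
mask = outlier_mask(w, w_q, ratio=0.01)
print(f"trainable outliers: {mask.sum().item()} of {w.numel()} weights")
```

In a "quantize before fine-tuning" setup, a step like this would precede task adaptation: the bulk of the network is frozen in low precision, and only the few marked parameters (or a small side module) are updated to absorb the quantization error.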