Automatic code generation is an effective way to improve the efficiency of software development. Existing research often treats code generation as a sequence-to-sequence task, and fine-tuning large-scale pre-trained language models is typically accompanied by high computing costs. In this paper, a method of prompt-learning-based parameter-efficient code generation (PPECG) is proposed. PPECG guides the pre-trained language model to generate code by querying the code corpus for the result most similar to the current intent, while most of the model's parameters remain frozen, thereby reducing computing cost. To verify the effectiveness of PPECG, two code generation datasets are selected, namely CONCODE and Solidity4CG, and the BLEU, CodeBLEU, and Exact Match scores of the generated results are computed. Experimental results show that PPECG effectively reduces GPU memory cost during fine-tuning and is close to, or even better than, current SOTA methods on the above benchmarks, demonstrating that it is capable of completing code generation tasks well.
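The retrieval step described above can be illustrated with a minimal sketch. The function names, the bag-of-words similarity measure, and the prompt template below are all assumptions for illustration, not the paper's actual implementation; PPECG would use its own similarity query over the code corpus and feed the resulting prompt to a frozen pre-trained language model.

```python
# Hypothetical sketch: retrieve the corpus entry most similar to the current
# intent and prepend it to the prompt for a frozen language model.
# All names here (cosine, retrieve, build_prompt) are illustrative assumptions.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(intent: str, corpus: list) -> tuple:
    """Return the (intent, code) pair whose intent is most similar to the query."""
    q = Counter(intent.lower().split())
    return max(corpus, key=lambda pair: cosine(q, Counter(pair[0].lower().split())))

def build_prompt(intent: str, corpus: list) -> str:
    """Concatenate the retrieved example with the current intent as a prompt."""
    ex_intent, ex_code = retrieve(intent, corpus)
    return (f"# Example intent: {ex_intent}\n"
            f"# Example code: {ex_code}\n"
            f"# Intent: {intent}\n# Code:")

corpus = [
    ("return the maximum of two integers", "int max(int a, int b) { return a > b ? a : b; }"),
    ("concatenate two strings", "String cat(String a, String b) { return a + b; }"),
]
print(build_prompt("return the minimum of two integers", corpus))
```

The retrieved example is used only as conditioning context, so only the prompt (or a small set of prompt parameters) changes per query while the model weights stay fixed, which is the source of the memory savings the abstract reports.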