Recently, many large language models (LLMs) have been proposed, showing advanced proficiency in code generation. Meanwhile, many efforts have been dedicated to evaluating LLMs on code generation benchmarks such as HumanEval. Although helpful for comparing different LLMs, existing evaluations focus on a simple code generation scenario (i.e., function-level or statement-level code generation), which mainly asks LLMs to generate a single code unit (e.g., a function or a statement) from a given natural language description. Because such evaluation targets independent and often small-scale code units, it remains unclear how LLMs perform in real-world software development scenarios.
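For concreteness, below is a minimal sketch of what a function-level task in this style asks of a model: given a signature and a natural-language docstring, complete the body of one self-contained function. The problem and tests here are illustrative, modeled on HumanEval's style rather than quoted from the benchmark.

```python
# Illustrative HumanEval-style prompt: the model sees the signature
# and docstring, and must generate the body of this single function.
def has_close_elements(numbers: list[float], threshold: float) -> bool:
    """Check whether any two numbers in the list are closer to each
    other than the given threshold."""
    # One completion the benchmark's tests would accept:
    for i, a in enumerate(numbers):
        for b in numbers[i + 1:]:
            if abs(a - b) < threshold:
                return True
    return False


# Correctness is typically judged by running hidden unit tests
# against the generated body (e.g., the pass@k metric).
assert has_close_elements([1.0, 2.0, 3.9], 0.3) is False
assert has_close_elements([1.0, 2.8, 3.0], 0.5) is True
```

Note that the task is fully specified by the docstring alone: the model needs no repository context, project dependencies, or cross-file reasoning, which is precisely the gap between this evaluation scenario and real-world software development.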