Pretrained models for code have exhibited promising performance across various code-related tasks, such as code summarization, code completion, code translation, and bug detection. However, despite their success, most current models still represent code as a flat token sequence, which may not adequately capture the underlying structure of the code.