JOURNAL ARTICLE

Leveraging pre-trained language models for code generation

Abstract

Code assistance refers to the use of tools, techniques, and models that help developers during software development. As coding tasks become increasingly complex, code assistants play a pivotal role in enhancing developer productivity, reducing errors, and enabling a more efficient coding workflow. This assistance can take various forms, including code autocompletion, error detection and correction, code generation, documentation support, and context-aware suggestions. Language models have emerged as integral components of code assistance, giving developers the ability to receive intelligent suggestions, generate code snippets, and improve overall coding proficiency. In this paper, we propose new hybrid models for code generation that pair the pre-trained language models BERT, RoBERTa, ELECTRA, and LUKE with the Marian causal language model as a decoder. We selected these models for their strong performance across a range of natural language processing tasks. We evaluate the resulting models on two datasets, CoNaLa and DJANGO, and compare them to existing state-of-the-art models. Our aim is to investigate the potential of pre-trained transformer language models to advance code generation, offering improved precision and efficiency in complex coding scenarios. Additionally, we conduct an error analysis and refine the generated code. Our results show that these models, when combined with the Marian decoder, significantly improve code generation accuracy and efficiency. Notably, the RoBERTa-Marian model achieved a maximum BLEU score of 35.74 and an exact-match accuracy of 13.8% on CoNaLa, while LUKE-Marian attained a BLEU score of 89.34 and an exact-match accuracy of 78.50% on DJANGO. An implementation of this work is available at https://github.com/AhmedSSoliman/Leveraging-Pretrained-Language-Models-for-Code-Generation.
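To make the proposed architecture concrete, the sketch below pairs a pre-trained RoBERTa encoder with the decoder of a Marian translation checkpoint via Hugging Face Transformers' EncoderDecoderModel. This is a minimal illustration under stated assumptions, not the paper's training code: the checkpoint names ("roberta-base", "Helsinki-NLP/opus-mt-en-de") and the example intent are illustrative, and a real system would first be fine-tuned on (intent, snippet) pairs from CoNaLa or DJANGO.

```python
# Minimal sketch (not the authors' exact code): a pre-trained encoder
# combined with a Marian decoder acting as a causal language model.
# All checkpoint names here are illustrative assumptions.
from transformers import AutoTokenizer, EncoderDecoderModel

ENC = "roberta-base"                # natural-language encoder
DEC = "Helsinki-NLP/opus-mt-en-de"  # Marian checkpoint; its decoder is reused

# The decoder half is loaded as a causal LM, and cross-attention to the
# encoder's hidden states is wired up by EncoderDecoderModel.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(ENC, DEC)

enc_tok = AutoTokenizer.from_pretrained(ENC)  # tokenizes the NL intent
dec_tok = AutoTokenizer.from_pretrained(DEC)  # detokenizes generated code

# Marian decoders conventionally start generation from the pad token.
model.config.decoder_start_token_id = dec_tok.pad_token_id
model.config.pad_token_id = dec_tok.pad_token_id

# After fine-tuning on (intent, code) pairs, inference would look like:
intent = "sort a list of dictionaries by the key 'price'"
inputs = enc_tok(intent, return_tensors="pt")
out_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(dec_tok.decode(out_ids[0], skip_special_tokens=True))
```

The two reported metrics are standard and straightforward to reproduce; the snippet below computes corpus BLEU and exact-match accuracy over (hypothesis, reference) pairs, with sacrebleu as an assumed choice of BLEU implementation rather than necessarily the one used in the paper.

```python
# Hedged evaluation sketch: BLEU via sacrebleu (assumed implementation)
# and exact match as the fraction of outputs identical to the reference.
import sacrebleu

hyps = ["x.sort(key=lambda d: d['price'])"]  # model outputs
refs = ["x.sort(key=lambda d: d['price'])"]  # gold snippets

bleu = sacrebleu.corpus_bleu(hyps, [refs]).score
exact = 100.0 * sum(h == r for h, r in zip(hyps, refs)) / len(hyps)
print(f"BLEU = {bleu:.2f}, Exact Match = {exact:.2f}%")
```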

Keywords:
Code generation, Language model, Natural language, Code review, Software, Source code

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.57

Topics

Software Engineering Research (Physical Sciences → Computer Science → Information Systems)
Advanced Malware Detection Techniques (Physical Sciences → Computer Science → Signal Processing)
Software Engineering Techniques and Practices (Physical Sciences → Computer Science → Information Systems)

Related Documents

JOURNAL ARTICLE

Leveraging pre-trained language models for code generation

Ahmed Soliman, Samir I. Shaheen, Mayada Hadhoud

Journal: Complex & Intelligent Systems  Year: 2024  Vol: 10 (3)  Pages: 3955-3980
JOURNAL ARTICLE

Table Caption Generation in Scholarly Documents Leveraging Pre-trained Language Models

Junjie H. Xu, Kohei Shinden, Makoto P. Kato

Journal: 2021 IEEE 10th Global Conference on Consumer Electronics (GCCE)  Year: 2021  Pages: 963-966