JOURNAL ARTICLE

Prompting and Fine-tuning Pre-trained Generative Language Models

Abstract

There has been an explosion of available pre-trained and fine-tuned generative language models (LMs). They vary in the number of parameters, architecture, training strategy, and training-set size. Alongside this variety, alternative strategies exist for exploiting these models, such as fine-tuning and prompt engineering. However, many questions arise throughout this process: Which model should be applied to a given task? Which strategies should be used? Will prompt engineering solve all tasks? What are the computational and financial costs involved? This tutorial introduces and explores typical modern LM architectures, with a hands-on approach to the available strategies.
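To make the contrast between strategies concrete, the sketch below shows the simplest form of prompt engineering: constructing zero-shot and few-shot prompts as plain strings before passing them to a generative LM. The task, example pairs, and function names are illustrative assumptions, not taken from the article.

```python
# Hedged sketch of prompt construction for a generative LM.
# The sentiment task and examples are invented for illustration.

def zero_shot_prompt(instruction: str, query: str) -> str:
    """Ask the model directly, with no worked examples."""
    return f"{instruction}\nInput: {query}\nOutput:"

def few_shot_prompt(instruction: str,
                    examples: list[tuple[str, str]],
                    query: str) -> str:
    """Prepend labeled examples so the model can infer the pattern in-context."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n{shots}\nInput: {query}\nOutput:"

prompt = few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("great movie", "positive"), ("terrible plot", "negative")],
    "loved the soundtrack",
)
print(prompt)
```

Unlike fine-tuning, no model weights change here; the few-shot variant simply spends extra context tokens on examples, which is one axis of the computational cost trade-off the abstract raises.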

Keywords:
Generative language models; Fine-tuning; Prompt engineering; Language model; Artificial intelligence; Machine learning

Metrics

Cited by: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 15
Citation Normalized Percentile: 0.14

Topics

Natural Language Processing Techniques
Topic Modeling
Speech Recognition and Synthesis
(all under Physical Sciences → Computer Science → Artificial Intelligence)
© 2026 ScienceGate Book Chapters. All rights reserved.