Abstract

Collecting and annotating task-oriented dialogues is time-consuming and costly; thus, zero- and few-shot learning could greatly benefit dialogue state tracking (DST). In this work, we propose an in-context learning (ICL) framework for zero-shot and few-shot DST, in which a large pre-trained language model (LM) takes a test instance and a few exemplars as input and directly decodes the dialogue state without any parameter updates. To better leverage a tabular domain description in the LM prompt, we reformulate DST as a text-to-SQL problem. We also propose a novel approach to retrieve annotated dialogues as exemplars. Empirical results on MultiWOZ show that our method, IC-DST, substantially outperforms previous fine-tuned state-of-the-art models in few-shot settings. In addition, we test IC-DST in zero-shot settings, in which the model only takes a fixed task instruction as input, and find that it outperforms previous zero-shot methods by a large margin.
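As a rough illustration of the framing the abstract describes, the minimal sketch below assembles a text-to-SQL prompt: the domain's tabular description is rendered as a CREATE TABLE statement, a few retrieved exemplars map dialogue context to SQL, and the LM is left to complete the query for the test instance. The schema, the build_prompt helper, and the exemplar format here are all illustrative assumptions, not the paper's exact prompt.

```python
# Hypothetical sketch of an IC-DST-style prompt. The slot names, the
# CHECK constraints, and the exemplar layout are assumptions made for
# illustration; only the overall idea (schema + exemplars + test
# instance, decoded as SQL with no parameter updates) comes from the
# abstract.

SCHEMA = """CREATE TABLE hotel (
    name text,
    area text CHECK (area IN ('north', 'south', 'east', 'west', 'centre')),
    price_range text CHECK (price_range IN ('cheap', 'moderate', 'expensive'))
);"""

def build_prompt(exemplars, test_dialogue):
    """Assemble the prompt: tabular domain description as SQL, then
    retrieved exemplars (dialogue context -> SQL), then the unlabeled
    test instance, left open for the LM to complete."""
    parts = [SCHEMA, ""]
    for context, condition in exemplars:
        parts += [f"-- Dialogue: {context}",
                  f"SELECT * FROM hotel WHERE {condition};", ""]
    parts += [f"-- Dialogue: {test_dialogue}",
              "SELECT * FROM hotel WHERE"]
    return "\n".join(parts)

exemplars = [
    ("I need a cheap hotel in the north.",
     "price_range = 'cheap' AND area = 'north'"),
]
print(build_prompt(exemplars, "Find me an expensive place to stay in the centre."))
# The LM's completion (e.g. "price_range = 'expensive' AND area = 'centre'")
# would then be parsed back into slot-value pairs as the predicted state.
```

In the zero-shot setting the abstract mentions, the exemplar list would be empty and only a fixed task instruction would precede the test instance.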


