JOURNAL ARTICLE

Asking Clarification Questions in Knowledge-Based Question Answering

Abstract

The ability to ask clarification questions is essential for knowledge-based question answering (KBQA) systems, especially for handling ambiguous phenomena. Despite its importance, clarification has not been well explored in current KBQA systems. Further progress requires supervised resources for training and evaluation, and powerful models for clarification-related text understanding and generation. In this paper, we construct a new clarification dataset, CLAQUA, with nearly 40K open-domain examples. The dataset supports three serial tasks: given a question, identify whether clarification is needed; if yes, generate a clarification question; then predict answers based on external user feedback. We provide representative baselines for these tasks and further introduce a coarse-to-fine model for clarification question generation. Experiments show that the proposed model achieves better performance than strong baselines. Further analysis demonstrates that our dataset brings new challenges and that several problems remain unsolved, such as reasonable automatic evaluation metrics for clarification question generation and powerful models for handling entity sparsity.

* The work was done while Jingjing Xu and Yuechen Wang were interns at Microsoft Research Asia. 1 The dataset and code
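The three serial tasks described above form a pipeline. The sketch below illustrates that flow under stated assumptions: all class and function names are hypothetical stand-ins for illustration, not the paper's actual models or API, and the ambiguity check is a trivial heuristic in place of a trained classifier.

```python
# Hypothetical sketch of the three serial CLAQUA tasks from the abstract.
# All names here are illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass


@dataclass
class KBQAExample:
    question: str        # possibly ambiguous user question
    entities: list       # candidate knowledge-base entities it may refer to


def needs_clarification(example: KBQAExample) -> bool:
    """Task 1: decide whether the question is ambiguous.
    A trivial heuristic stands in for a trained classifier."""
    return len(example.entities) > 1


def generate_clarification(example: KBQAExample) -> str:
    """Task 2: generate a clarification question over the candidate entities."""
    options = " or ".join(example.entities)
    return f"Which one do you mean: {options}?"


def predict_answer(example: KBQAExample, feedback: str) -> str:
    """Task 3: predict an answer conditioned on external user feedback."""
    return f"answer to '{example.question}' given '{feedback}'"


# Wiring the tasks serially:
ex = KBQAExample(
    question="When was Avatar released?",
    entities=["Avatar (2009 film)", "Avatar (TV series)"],
)
if needs_clarification(ex):
    clarification = generate_clarification(ex)
    feedback = "Avatar (2009 film)"      # simulated user reply
    answer = predict_answer(ex, feedback)
else:
    answer = predict_answer(ex, ex.question)
```

The key design point the dataset supports is the conditional branch: generation and feedback-conditioned answering only run when the first task flags ambiguity.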

Keywords:
Question answering; Knowledge base; Clarification questions

