JOURNAL ARTICLE

Injecting Commonsense Knowledge into Prompt Learning for Zero-Shot Text Classification

Abstract

The combination of pre-training and fine-tuning has become the default solution to Natural Language Processing (NLP) tasks. The emergence of prompt learning breaks this routine, especially in low-data scenarios. Insufficient labelled data, or even entirely unseen classes, are frequent problems in text classification; equipping Pre-trained Language Models (PLMs) with task-specific prompts helps escape this dilemma. However, general-purpose PLMs carry little commonsense knowledge. In this work, we propose a KG-driven verbalizer that leverages a commonsense Knowledge Graph (KG) to map label words to predefined classes. Specifically, we transform the mapping relationships into semantic relevance in a commonsense-injected embedding space. On the zero-shot text classification task, experimental results demonstrate the effectiveness of our KG-driven verbalizer on a Twitter dataset of natural disasters (HumAID) compared with other baselines.
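To make the verbalizer idea concrete, the sketch below shows one plausible way to map a PLM's predicted label word onto predefined classes via semantic relevance in an embedding space. It is a minimal illustration under stated assumptions, not the paper's exact method: the class names, the toy embedding table, the KG-expanded label-word lists, and the mean-cosine scoring rule are all illustrative assumptions.

```python
# Minimal sketch of a KG-driven verbalizer, assuming (a) related label words
# per class were already retrieved from a commonsense KG such as ConceptNet,
# and (b) a commonsense-injected embedding lookup table is available.
# All vectors and word lists below are toy assumptions for illustration.
import numpy as np

# Hypothetical commonsense-injected embeddings (toy 4-d vectors).
EMBEDDINGS = {
    "earthquake": np.array([0.9, 0.1, 0.0, 0.1]),
    "tremor":     np.array([0.8, 0.2, 0.1, 0.0]),
    "collapsed":  np.array([0.7, 0.1, 0.2, 0.1]),
    "flood":      np.array([0.1, 0.9, 0.1, 0.0]),
    "rainfall":   np.array([0.2, 0.8, 0.0, 0.1]),
}

# Label words per class, as if expanded from KG neighbours of the class name.
CLASS_LABEL_WORDS = {
    "earthquake": ["earthquake", "tremor", "collapsed"],
    "flood": ["flood", "rainfall"],
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def class_scores(predicted_word):
    """Map a PLM's predicted [MASK] token to classes by semantic relevance:
    score each class as the mean cosine similarity between the predicted
    word and that class's KG-derived label words."""
    w = EMBEDDINGS[predicted_word]
    return {
        cls: float(np.mean([cosine(w, EMBEDDINGS[lw]) for lw in words]))
        for cls, words in CLASS_LABEL_WORDS.items()
    }

if __name__ == "__main__":
    # Suppose the PLM filled the prompt's [MASK] with "tremor" for a tweet;
    # the class with the highest relevance score is predicted zero-shot.
    print(class_scores("tremor"))
```

In a real pipeline, the PLM's full distribution over the vocabulary at the [MASK] position would typically be aggregated over each class's label-word set rather than a single top prediction; the single-word version above is kept only for brevity.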

Keywords:
Commonsense knowledge, Commonsense reasoning, Zero-shot text classification, Prompt learning, Knowledge graphs, Natural language processing

