KPT++: Refined knowledgeable prompt tuning for few-shot text classification

Shiwen Ni, Hung Yu Kao

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, the new paradigm “pre-train, prompt, and predict” has achieved remarkable few-shot learning results compared with the “pre-train, fine-tune” paradigm. Prompt-tuning inserts prompt text into the input and converts the classification task into a masked language modeling task. One of the key steps is to build a projection between the labels and the label words, i.e., the verbalizer. Knowledgeable prompt-tuning (KPT) integrates external knowledge into the verbalizer to improve and stabilize prompt-tuning: it uses word embeddings and various knowledge graphs to expand the label-word space to hundreds of words per class. However, some unreasonable label words in the verbalizer may harm accuracy. In this paper, a new method called KPT++ is proposed to improve few-shot text classification. KPT++ is a refined knowledgeable prompt-tuning, which can also be regarded as an upgraded version of KPT. Specifically, KPT++ refines the knowledgeable verbalizer with two newly proposed techniques, prompt grammar refinement (PGR) and probability distribution refinement (PDR). Extensive experiments on few-shot text classification tasks demonstrate that KPT++ outperforms the state-of-the-art method KPT and other baseline methods. Furthermore, ablation experiments and case studies demonstrate the effectiveness of both the PGR and PDR refinements.
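To make the verbalizer idea concrete, below is a minimal sketch (not the authors' code) of prompt-tuning with a knowledgeable verbalizer: the input is wrapped in a template containing a mask slot, and each class is scored by aggregating the masked-LM probabilities of many label words. The template text, the label-word lists, and the plain averaging rule are illustrative assumptions; KPT and KPT++ expand the label words far more broadly and refine or reweight this aggregation.

```python
# Sketch of verbalizer-based prompt-tuning for text classification.
# Assumes bert-base-uncased; template and verbalizer entries are
# hypothetical examples, not those used in the KPT++ paper.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Knowledgeable verbalizer: each label maps to many label words.
# (KPT expands these to hundreds per class via embeddings and
# knowledge graphs; a refined verbalizer would prune noisy words.)
verbalizer = {
    "sports":   ["sports", "football", "athlete", "tournament"],
    "business": ["business", "finance", "market", "economy"],
}

def classify(text: str) -> str:
    # Wrap the input in a prompt template with a [MASK] slot.
    prompt = f"{text} This topic is about {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]

    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos.item()]
    probs = logits.softmax(dim=-1)

    # Score each class by averaging the probabilities of its label
    # words at the mask position; KPT-style methods refine this step.
    scores = {}
    for label, words in verbalizer.items():
        ids = [tokenizer.convert_tokens_to_ids(w) for w in words]
        scores[label] = probs[ids].mean().item()
    return max(scores, key=scores.get)

print(classify("The quarterly earnings report beat analyst expectations."))
```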

Original language: English
Article number: 110647
Journal: Knowledge-Based Systems
Volume: 274
DOIs
Publication status: Published - 15 Aug 2023

All Science Journal Classification (ASJC) codes

  • Software
  • Management Information Systems
  • Information Systems and Management
  • Artificial Intelligence
