KPT++: Refined knowledgeable prompt tuning for few-shot text classification

Shiwen Ni, Hung Yu Kao

Research output: Article (peer-reviewed)

7 Citations (Scopus)

Abstract

Recently, the new paradigm “pre-train, prompt, and predict” has achieved remarkable few-shot learning results compared with the “pre-train, fine-tune” paradigm. Prompt-tuning inserts prompt text into the input and converts the classification task into a masked language modeling task. One of the key steps is building a projection between the labels and the label words, i.e., the verbalizer. Knowledgeable prompt-tuning (KPT) integrates external knowledge into the verbalizer to improve and stabilize prompt-tuning. KPT uses word embeddings and various knowledge graphs to expand the label-word space to hundreds of words per class. However, some unreasonable label words in the verbalizer may damage accuracy. In this paper, a new method called KPT++ is proposed to improve few-shot text classification. KPT++ refines knowledgeable prompt-tuning and can be regarded as an upgraded version of KPT. Specifically, KPT++ uses two newly proposed techniques, prompt grammar refinement (PGR) and probability distribution refinement (PDR), to refine the knowledgeable verbalizer. Extensive experiments on few-shot text classification tasks demonstrate that our KPT++ outperforms the state-of-the-art method KPT as well as other baseline methods. Furthermore, ablation experiments and case studies demonstrate the effectiveness of both the PGR and PDR refinement methods.
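To make the verbalizer mechanism concrete, the following is a minimal sketch (not the authors' released code) of prompt-based classification with a many-word verbalizer in the spirit of KPT. The template, class names, and label words are illustrative assumptions; a masked language model scores each class's label words at the mask position, and the scores are averaged per class.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# A knowledgeable verbalizer maps each class to many label words;
# KPT expands these sets via word embeddings and knowledge graphs.
# The words below are illustrative assumptions (single-token words
# only, to keep this sketch simple).
verbalizer = {
    "sports": ["sports", "football", "athlete"],
    "business": ["business", "finance", "market"],
}

text = "The striker scored twice in the final."
# Prompt-tuning wraps the input in a template containing a mask token.
prompt = f"{text} This topic is about {tokenizer.mask_token}."

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]
probs = logits.softmax(dim=-1)

# Average the mask-position probabilities of each class's label words.
scores = {
    label: probs[[tokenizer.convert_tokens_to_ids(w) for w in words]].mean().item()
    for label, words in verbalizer.items()
}
print(max(scores, key=scores.get))  # predicted class, e.g. "sports"

Averaging over an expanded label-word set is what makes the knowledgeable verbalizer robust, but it also admits noisy words; the paper's PGR and PDR steps refine this word set, and their details are given in the paper rather than in this sketch.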

Original language: English
Article number: 110647
Journal: Knowledge-Based Systems
Volume: 274
DOIs
Publication status: Published - 2023 Aug 15

All Science Journal Classification (ASJC) codes

  • Software
  • Management Information Systems
  • Information Systems and Management
  • Artificial Intelligence

