Discriminative training for near-synonym substitution

Liang-Chih Yu, Hsiu-Min Shih, Yu-Ling Lai, Jui-Feng Yeh, Chung-Hsien Wu

Research output: Contribution to conference › Paper › peer-review

7 Citations (Scopus)


Near-synonyms are useful knowledge resources for many natural language applications, such as query expansion for information retrieval (IR) and paraphrasing for text generation. However, near-synonyms are not necessarily interchangeable in context due to their specific usage and syntactic constraints. Accordingly, it is worthwhile to develop algorithms that verify whether near-synonyms match a given context. In this paper, we treat near-synonym substitution as a classification task, where a classifier is trained for each near-synonym set to classify test examples into one of the near-synonyms in the set. We also propose the use of discriminative training to improve the classifiers by distinguishing positive and negative features for each near-synonym. Experimental results show that the proposed method achieves higher accuracy than both the pointwise mutual information (PMI) and n-gram-based methods used in previous studies.
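The classification framing in the abstract can be illustrated with a toy sketch: one multi-class discriminative classifier per near-synonym set, trained in an error-driven way that rewards features for the correct near-synonym and penalizes them for the wrongly predicted one. This is only a minimal perceptron-style illustration, not the authors' actual system; the near-synonym set, training sentences, and bag-of-words features below are invented for the example:

```python
from collections import defaultdict

# Toy training data: (context words, correct near-synonym) pairs.
# The near-synonym set {error, mistake, fault} and the sentences
# are invented purely for illustration.
TRAIN = [
    (["the", "program", "reported", "a", "syntax"], "error"),
    (["a", "careless", "spelling"], "mistake"),
    (["the", "engineer", "traced", "the", "hardware"], "fault"),
    (["the", "compiler", "flagged", "an"], "error"),
    (["she", "admitted", "her"], "mistake"),
    (["a", "geological"], "fault"),
]
LABELS = sorted({y for _, y in TRAIN})

# One weight vector per near-synonym in the set.
weights = {y: defaultdict(float) for y in LABELS}

def score(ctx, y):
    """Score a candidate near-synonym y for a context (bag of words)."""
    return sum(weights[y][w] for w in ctx)

def predict(ctx):
    """Pick the near-synonym in the set that best fits the context."""
    return max(LABELS, key=lambda y: score(ctx, y))

# Discriminative (error-driven) training: when the classifier picks
# the wrong near-synonym, treat the context words as positive
# features for the gold label and negative features for the guess.
for _ in range(10):
    for ctx, gold in TRAIN:
        guess = predict(ctx)
        if guess != gold:
            for w in ctx:
                weights[gold][w] += 1.0
                weights[guess][w] -= 1.0

print(predict(["the", "compiler", "reported", "an"]))  # -> error
```

The key idea mirrored here is the positive/negative feature distinction: the same context word can push toward one near-synonym and away from another, which is what separates discriminative training from simply counting co-occurrences as PMI does.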

Original language: English
Number of pages: 9
Publication status: Published - 2010
Event: 23rd International Conference on Computational Linguistics, Coling 2010 - Beijing, China
Duration: 2010 Aug 23 - 2010 Aug 27



All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Computational Theory and Mathematics
  • Linguistics and Language


