Near-synonyms are useful knowledge resources for many natural language applications, such as query expansion for information retrieval (IR) and paraphrasing for text generation. However, near-synonyms are not necessarily interchangeable in context because of their specific usage and syntactic constraints. Accordingly, it is worthwhile to develop algorithms that verify whether near-synonyms match a given context. In this paper, we treat near-synonym substitution as a classification task, where a classifier is trained for each near-synonym set to classify test examples into one of the near-synonyms in the set. We also propose the use of discriminative training to improve classifiers by distinguishing positive and negative features for each near-synonym. Experimental results show that the proposed method achieves higher accuracy than both pointwise mutual information (PMI) and n-gram-based methods used in previous studies.
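The PMI baseline mentioned above can be illustrated with a minimal sketch: score each candidate near-synonym by summing its pointwise mutual information with the surrounding context words, and pick the highest-scoring candidate. The co-occurrence counts and word pairs below are hypothetical toy data for illustration, not figures from the paper.

```python
import math
from collections import Counter

# Hypothetical toy co-occurrence counts: cooc[(w, c)] is the number of
# times candidate near-synonym w co-occurs with context word c.
cooc = Counter({
    ("error", "fix"): 30, ("error", "message"): 25, ("error", "careless"): 3,
    ("mistake", "fix"): 5, ("mistake", "message"): 2, ("mistake", "careless"): 20,
})

# Marginal counts derived from the joint counts.
word_count = Counter()
ctx_count = Counter()
for (w, c), n in cooc.items():
    word_count[w] += n
    ctx_count[c] += n
total = sum(cooc.values())

def pmi(w, c):
    """PMI(w, c) = log2( P(w, c) / (P(w) * P(c)) ), 0 for unseen pairs."""
    joint = cooc[(w, c)]
    if joint == 0:
        return 0.0
    return math.log2(joint * total / (word_count[w] * ctx_count[c]))

def choose(candidates, context):
    """Return the near-synonym whose summed PMI with the context is highest."""
    return max(candidates, key=lambda w: sum(pmi(w, c) for c in context))

print(choose(["error", "mistake"], ["fix", "message"]))  # -> error
```

In this toy data, "error" co-occurs strongly with "fix" and "message", so it wins for that context; the paper's contribution is a trained classifier per near-synonym set that outperforms this kind of unsupervised scoring.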
|Publication status||Published - 2010|
|Event||23rd International Conference on Computational Linguistics, Coling 2010 - Beijing, China|
Duration: 23 Aug 2010 → 27 Aug 2010