Fully used reliable data and attention consistency for semi-supervised learning

Jui Hung Chang, Hsiu Chen Weng

Research output: Article, peer-reviewed


Building large labeled datasets consumes costly human labor. Semi-supervised learning therefore leverages large amounts of unlabeled data to improve training when labels are limited. Many semi-supervised methods apply diverse data augmentations so that the model learns classification rules that are invariant to these changes, but this requires the model to spend considerable time adapting to them. Reducing the noise in the unlabeled data used for training is another frequently discussed issue in semi-supervised learning, since it limits the damage caused by erroneous predictions. A common approach defines data whose predicted probability exceeds a threshold as confident and trains only on those high-confidence unlabeled data, so that the model avoids being misled by erroneous predictions on unlabeled data. However, this also means that much of the unlabeled data cannot be used effectively. This study therefore proposes a semi-supervised framework comprising Attention Consistency (AC) and One Supervised (OS) algorithms, which improves the efficiency and performance of model learning by guiding the model to attend to class-discriminative features and by judging whether the model can still be trained effectively on the existing reliable data. In this way, the model makes full use of the unlabeled data for training. Experimental results and comparisons show that results similar to those of other methods can be reached with a shorter training process. This paper also analyzes the distribution of feature results and proposes a new measurement to extract distribution information.
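The confidence-thresholding baseline the abstract contrasts with can be illustrated with a minimal sketch (this is a generic illustration, not the authors' implementation, and the function name and threshold value are assumptions): unlabeled examples are kept for training only when the model's top predicted probability exceeds a threshold, so low-confidence predictions are discarded and, as the abstract notes, much of the unlabeled data goes unused.

```python
def filter_confident(probs, threshold=0.95):
    """Return (index, pseudo_label) pairs for predictions above threshold.

    probs: list of per-class probability lists, one per unlabeled example.
    """
    selected = []
    for i, p in enumerate(probs):
        top = max(p)
        if top > threshold:
            # Keep this example, pseudo-labeled with its argmax class.
            selected.append((i, p.index(top)))
    return selected

# Example: only the first prediction clears the 0.95 threshold;
# the other two unlabeled examples are simply never trained on.
probs = [[0.97, 0.02, 0.01],   # confident -> kept
         [0.60, 0.30, 0.10],   # uncertain -> discarded
         [0.50, 0.45, 0.05]]   # uncertain -> discarded
print(filter_confident(probs))  # [(0, 0)]
```

The proposed framework is motivated by exactly this waste: the discarded examples carry no training signal under the thresholding scheme.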

Journal: Knowledge-Based Systems
Publication status: Published - 5 Aug 2022

All Science Journal Classification (ASJC) codes

  • Management Information Systems
  • Software
  • Information Systems and Management
  • Artificial Intelligence
