Large labeled datasets are costly to produce because they require extensive human labor. Semi-supervised learning therefore leverages large amounts of unlabeled data to improve training when labels are limited. Many semi-supervised methods apply diverse data augmentations so that the model learns classification rules that are invariant to these changes, but adapting to the augmentations requires long training. Another frequently discussed issue is reducing the noise introduced when training on unlabeled data, so that the influence of erroneous predictions is limited. A typical approach defines unlabeled samples whose predicted probability exceeds a threshold as confident, and trains only on these high-confidence samples, shielding the model from errors in unlabeled-data predictions. However, this also means that much of the unlabeled data cannot be used effectively. This study therefore proposes a semi-supervised framework comprising Attention Consistency (AC) and One Supervised (OS) algorithms, which improves the efficiency and performance of model learning by guiding the model to attend to class-discriminative features and by judging whether the model can still be trained effectively on the currently reliable data. In this way, the model makes full use of unlabeled data during training. Experimental results and comparisons show that results similar to those of other methods can be reached with a shorter training process. This paper also analyzes the distribution of feature representations and proposes a new measurement for extracting distribution information.
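The confidence-filtering baseline described above can be sketched as follows. This is a minimal NumPy illustration of threshold-based pseudo-label selection, not the proposed AC/OS framework; the function name and the 0.95 threshold are illustrative assumptions:

```python
import numpy as np

def select_confident(probs: np.ndarray, threshold: float = 0.95):
    """Return pseudo-labels and a mask for unlabeled samples whose
    maximum predicted class probability exceeds the threshold."""
    confidence = probs.max(axis=1)        # highest class probability per sample
    pseudo_labels = probs.argmax(axis=1)  # predicted class per sample
    mask = confidence >= threshold        # keep only high-confidence samples
    return pseudo_labels[mask], mask

# Example: model predictions for 4 unlabeled samples over 3 classes
probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> kept, pseudo-label 0
    [0.40, 0.35, 0.25],   # uncertain -> discarded
    [0.05, 0.94, 0.01],   # just below 0.95 -> discarded
    [0.01, 0.01, 0.98],   # confident -> kept, pseudo-label 2
])
labels, mask = select_confident(probs, threshold=0.95)
```

Samples that fail the threshold (here the second and third rows) contribute no training signal, which is exactly the under-utilization of unlabeled data the abstract criticizes.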