TY - JOUR
T1 - MSCS: Multiscale Consistency Supervision With CNN-Transformer Collaboration for Semisupervised Histopathology Image Semantic Segmentation
AU - Hsieh, Min-En
AU - Chiou, Chien-Yu
AU - Tsai, Hung-Wen
AU - Chang, Yu-Cheng
AU - Chung, Pau-Choo
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - This study proposes a multiscale consistency supervision (MSCS) strategy that combines a semisupervised learning approach with multimagnification learning to ease the labeling load and improve the prediction accuracy of histopathology image semantic segmentation. The MSCS strategy incorporates multiview complementary information into the semisupervised learning process, where this information is obtained from multiscale views (i.e., cells and tissues) and from encoders with different decision perspectives. The strategy is implemented through the collaboration between convolutional neural network (CNN) and Transformer encoders, where the former encoder excels at capturing local spatial relationships in the input images and the latter encoder excels at capturing global relationships. In the proposed approach, the learning process is performed using two asymmetric multiscale fusion networks, designated as MSUnetFusion and MSUSegFormer. MSUnetFusion learns the cell-level features using CNN and the tissue-level features using Transformer. In contrast, MSUSegFormer learns both features using only Transformer. MSCS enforces prediction consistency between the two networks to enhance the prediction performance for unlabeled training data. The experimental results show that MSCS outperforms both supervised and semisupervised methods on hepatocellular carcinoma (HCC) and colorectal cancer (CRC) segmentation datasets, even when only limited labeled data are available. Overall, MSCS appears to provide a promising solution for histopathology image semantic segmentation.
UR - http://www.scopus.com/inward/record.url?scp=85201315272&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85201315272&partnerID=8YFLogxK
DO - 10.1109/TAI.2024.3443794
M3 - Article
AN - SCOPUS:85201315272
SN - 2691-4581
VL - 5
SP - 6356
EP - 6368
JO - IEEE Transactions on Artificial Intelligence
JF - IEEE Transactions on Artificial Intelligence
IS - 12
ER -