Edge-Preserving Guided Semantic Segmentation for VIPriors Challenge

Chih-Chung Hsu, Hsin-Ti Ma

Research output: Working paper › Preprint


Semantic segmentation is one of the most attractive research fields in computer vision. In the VIPriors challenge, only a very limited number of training samples is allowed, which makes current state-of-the-art, deep-learning-based semantic segmentation techniques hard to train well. To overcome this shortcoming, we propose edge-preserving guidance that supplies extra prior information and mitigates overfitting on the small-scale training dataset. First, a two-channel convolutional layer is concatenated to the last layer of a conventional semantic segmentation network. Then, an edge map is computed from the ground truth with the Sobel operator, followed by a hard-thresholding operation that indicates whether each pixel is an edge. The two-class cross-entropy loss between the predicted edge map and this edge ground truth, termed the edge-preserving loss, is then adopted. In this way, the proposed edge-preserving loss enforces the continuity of boundaries between different instances. Experiments demonstrate that, compared to state-of-the-art semantic segmentation techniques, the proposed method achieves excellent performance on the small-scale training set.
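The pipeline described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the gradient threshold of 0.1, and the epsilon inside the log are assumptions; in the paper the edge logits would come from the extra two-channel convolutional layer attached to the segmentation network.

```python
import numpy as np

def sobel_edge_map(mask, threshold=0.1):
    """Binary edge map from a ground-truth label mask (H, W) via Sobel gradients.

    The threshold value is a hypothetical choice; the paper only states that a
    hard-thresholding operation follows the Sobel operation.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x
    ky = kx.T                                                          # Sobel y
    padded = np.pad(mask.astype(float), 1, mode="edge")
    h, w = mask.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    grad = np.hypot(gx, gy)                    # gradient magnitude
    return (grad > threshold).astype(int)      # 1 = edge pixel, 0 = non-edge

def edge_preserving_loss(edge_logits, mask, threshold=0.1):
    """Two-class pixel-wise cross-entropy between predicted edge logits
    (shape (2, H, W)) and the Sobel-derived binary edge ground truth.
    """
    target = sobel_edge_map(mask, threshold)
    # numerically stable softmax over the channel axis
    z = edge_logits - edge_logits.max(axis=0, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    h, w = mask.shape
    # probability assigned to the correct class at every pixel
    picked = probs[target, np.arange(h)[:, None], np.arange(w)[None, :]]
    return -np.log(picked + 1e-12).mean()

# Toy example: a mask split into two regions produces edges at the boundary.
mask = np.zeros((8, 8), dtype=int)
mask[:, 4:] = 1
edges = sobel_edge_map(mask)
loss = edge_preserving_loss(np.random.default_rng(0).normal(size=(2, 8, 8)), mask)
```

In training, this loss would be added to the usual per-pixel segmentation loss, so gradients from the boundary supervision flow back through the shared backbone.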
Publication status: Published - 2020 Jul 17
