Edge-Preserving Guided Semantic Segmentation for VIPriors Challenge

Chih-Chung Hsu, Hsin-Ti Ma

Research output: Working paper / Preprint



Semantic segmentation is one of the most active research fields in computer vision. In the VIPriors challenge, only a very limited number of training samples is allowed, making current state-of-the-art, deep-learning-based semantic segmentation techniques hard to train well. To overcome this shortcoming, we propose edge-preserving guidance that supplies extra prior information and mitigates overfitting on the small-scale training dataset. First, a two-channel convolutional layer is appended to the last layer of a conventional semantic segmentation network. Then, an edge map is computed from the ground truth by a Sobel operation, followed by a hard-thresholding operation that indicates whether each pixel lies on an edge. A two-class cross-entropy loss, termed the edge-preserving loss, is then adopted to measure the discrepancy between the predicted edge map and its ground truth. In this way, the proposed edge-preserving loss enforces the continuity of boundaries between different instances. Experiments demonstrate that the proposed method achieves excellent performance on the small-scale training set compared to state-of-the-art semantic segmentation techniques.
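The edge-target construction and loss described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the threshold value, function names, and array shapes are assumptions, and the actual method would run inside a deep learning framework on the two-channel output layer.

```python
import numpy as np

def sobel_edges(labels, thresh=0.5):
    """Binary edge map from a ground-truth label map: Sobel gradient
    magnitude followed by a hard threshold (thresh is an assumed value)."""
    lab = labels.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    pad = np.pad(lab, 1, mode="edge")
    h, w = lab.shape
    gx = np.zeros_like(lab)
    gy = np.zeros_like(lab)
    for i in range(h):          # naive 3x3 correlation for clarity
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return (np.hypot(gx, gy) > thresh).astype(np.int64)

def edge_preserving_loss(logits, edge_gt):
    """Two-class cross-entropy between predicted edge logits
    (shape 2 x H x W) and the binary edge ground truth (H x W)."""
    z = logits - logits.max(axis=0, keepdims=True)      # stable softmax
    log_p = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    # log-probability of the true class at each pixel
    ll = np.take_along_axis(log_p, edge_gt[None], axis=0)[0]
    return -ll.mean()
```

For a label map split into two regions, `sobel_edges` fires only on the pixels adjacent to the region boundary, and uniform (all-zero) logits give a loss of log 2, the expected value for an uninformative two-class predictor.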
Original language: English
Publication status: Published - 2020 Jul 17


