TY - JOUR
T1 - Semantic Context-Aware Image Style Transfer
AU - Liao, Yi-Sheng
AU - Huang, Chun-Rong
N1 - Funding Information:
This work was supported in part by the Ministry of Science and Technology of Taiwan under Grant MOST 110-2221-E-005-070, Grant MOST 110-2634-F-006-022, and Grant MOST 110-2327-B-006-006.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - To produce semantic image style transfer results that are consistent with human perception, the styles of semantic regions of the style image must be transferred to the corresponding semantic regions of the content image. However, when the content and style images do not share the same object categories, it is difficult to match semantic regions between the two images for semantic image style transfer. To solve this semantic matching problem and guide semantic image style transfer based on the matched regions, we propose a novel semantic context-aware image style transfer method that performs semantic context matching followed by a hierarchical local-to-global network architecture. The semantic context matching obtains corresponding regions between the content and style images by exploiting the context correlations of different object categories. Based on the matching results, we retrieve semantic context pairs, each composed of two semantically matched regions from the content and style images. To achieve semantic context-aware style transfer, we propose a hierarchical local-to-global network architecture containing two sub-networks: a local context network and a global context network. The former focuses on style transfer for each semantic context pair from the style image to the content image, and generates a local style transfer image that stores detailed style feature representations for the corresponding semantic regions. The latter derives the stylized image by considering the content image, the style image, and the intermediate local style transfer images, so that inconsistencies between different corresponding semantic regions can be resolved. Experimental results show that the stylized results of our method are more consistent with human perception than those of state-of-the-art methods.
AB - To produce semantic image style transfer results that are consistent with human perception, the styles of semantic regions of the style image must be transferred to the corresponding semantic regions of the content image. However, when the content and style images do not share the same object categories, it is difficult to match semantic regions between the two images for semantic image style transfer. To solve this semantic matching problem and guide semantic image style transfer based on the matched regions, we propose a novel semantic context-aware image style transfer method that performs semantic context matching followed by a hierarchical local-to-global network architecture. The semantic context matching obtains corresponding regions between the content and style images by exploiting the context correlations of different object categories. Based on the matching results, we retrieve semantic context pairs, each composed of two semantically matched regions from the content and style images. To achieve semantic context-aware style transfer, we propose a hierarchical local-to-global network architecture containing two sub-networks: a local context network and a global context network. The former focuses on style transfer for each semantic context pair from the style image to the content image, and generates a local style transfer image that stores detailed style feature representations for the corresponding semantic regions. The latter derives the stylized image by considering the content image, the style image, and the intermediate local style transfer images, so that inconsistencies between different corresponding semantic regions can be resolved. Experimental results show that the stylized results of our method are more consistent with human perception than those of state-of-the-art methods.
UR - http://www.scopus.com/inward/record.url?scp=85124761280&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124761280&partnerID=8YFLogxK
U2 - 10.1109/TIP.2022.3149237
DO - 10.1109/TIP.2022.3149237
M3 - Article
C2 - 35143399
AN - SCOPUS:85124761280
SN - 1057-7149
VL - 31
SP - 1911
EP - 1923
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -