Coupled adversarial learning for single image super-resolution

Chih Chung Hsu, Kuan Yu Huang

Research output: Conference contribution

1 Citation (Scopus)

Abstract

Generative adversarial networks (GANs) have been widely used in image restoration tasks such as denoising, enhancement, and super-resolution. The objective function of a GAN-based image super-resolution model typically combines a reconstruction error, a semantic feature distance, and a GAN loss. The semantic feature distance measures the similarity between features of the super-resolved and ground-truth images, ensuring that the two have similar feature representations. However, these features are usually extracted by a pre-trained model whose representation was not designed to distinguish the features of low-resolution images from those of high-resolution images. In this study, a coupled adversarial net (CAN) based on a Siamese network structure is proposed to improve the effectiveness of the feature extraction. The proposed CAN provides the GAN loss and the semantic feature distance simultaneously, reducing training complexity while improving performance. Extensive experiments show that the proposed CAN is effective and efficient compared to state-of-the-art image super-resolution schemes.
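The abstract describes a generator objective that combines three terms, with the feature distance and the GAN loss both derived from one shared (Siamese) feature extractor. The sketch below illustrates that structure with toy NumPy stand-ins; the encoder, discriminator head, and loss weights (`w_rec`, `w_feat`, `w_adv`) are hypothetical placeholders for illustration, not values or architectures from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared ("Siamese") encoder weights: both branches reuse W,
# so the same features serve the feature distance and the GAN loss.
W = rng.standard_normal((16, 8)) * 0.1   # encoder: 16-dim input -> 8-dim feature
v = rng.standard_normal(8) * 0.1         # discriminator head on the features

def encode(x):
    # Shared feature extractor applied to both branches.
    return np.tanh(x @ W)

def discriminate(feat):
    # Real/fake probability computed from the shared features.
    return 1.0 / (1.0 + np.exp(-(feat @ v)))

def generator_loss(sr, hr, w_rec=1.0, w_feat=0.006, w_adv=0.001):
    """Illustrative combined SR objective: reconstruction error,
    semantic feature distance, and non-saturating GAN loss."""
    f_sr, f_hr = encode(sr), encode(hr)
    rec  = np.mean((sr - hr) ** 2)                       # pixel reconstruction
    feat = np.mean((f_sr - f_hr) ** 2)                   # feature distance
    adv  = -np.mean(np.log(discriminate(f_sr) + 1e-8))   # GAN loss
    return w_rec * rec + w_feat * feat + w_adv * adv

# Toy batch: 4 flattened "super-resolved" vectors and nearby "ground truth".
sr = rng.standard_normal((4, 16))
hr = sr + 0.1 * rng.standard_normal((4, 16))
loss = generator_loss(sr, hr)
```

Because both loss terms read from the same encoder, one forward pass per branch suffices, which is the training-complexity saving the abstract alludes to.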

Original language: English
Title of host publication: 2020 IEEE 11th Sensor Array and Multichannel Signal Processing Workshop, SAM 2020
Publisher: IEEE Computer Society
ISBN (electronic): 9781728119465
DOIs
Publication status: Published - Jun 2020
Event: 11th IEEE Sensor Array and Multichannel Signal Processing Workshop, SAM 2020 - Hangzhou, China
Duration: 8 Jun 2020 → 11 Jun 2020

Publication series

Name: Proceedings of the IEEE Sensor Array and Multichannel Signal Processing Workshop
Volume: 2020-June
ISSN (electronic): 2151-870X

Conference

Conference: 11th IEEE Sensor Array and Multichannel Signal Processing Workshop, SAM 2020
Country/Territory: China
City: Hangzhou
Period: 08/06/20 → 11/06/20

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Control and Systems Engineering
  • Electrical and Electronic Engineering

