TY - JOUR
T1 - A Cost-Effective Interpolation for Multi-Magnification Super-Resolution
AU - Huang, Kuan Yu
AU - Pramanik, Suraj
AU - Chen, Pei Yin
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2022
Y1 - 2022
N2 - Super-Resolution (SR) is an important research topic, and SR methods based on Convolutional Neural Networks (CNNs) have demonstrated groundbreaking performance. However, implementing CNN models on resource-limited hardware devices remains a great challenge. Therefore, we present a hardware-friendly, low-cost interpolation for Multi-Magnification SR image reconstruction. We build on our previous work, a learning-based interpolation (LCDI) with a self-defined classifier of image texture, and extend its original × 2 architecture to × 3 and × 4 architectures. In addition, a fusion scheme reduces the number of required pre-trained weights. Experimentally, the proposed method requires 75% fewer pre-trained weights than LCDI. Compared to the related work OLM-SI (One linear learning mapping-SI), the run-time and the number of pre-trained weights of the × 2 proposed method are at least 90% lower. Compared to CNN-based SR methods, the proposed method achieves slightly lower performance, but at a much lower computational cost. In conclusion, the proposed method is a cost-effective and practical solution for resource-limited hardware devices.
AB - Super-Resolution (SR) is an important research topic, and SR methods based on Convolutional Neural Networks (CNNs) have demonstrated groundbreaking performance. However, implementing CNN models on resource-limited hardware devices remains a great challenge. Therefore, we present a hardware-friendly, low-cost interpolation for Multi-Magnification SR image reconstruction. We build on our previous work, a learning-based interpolation (LCDI) with a self-defined classifier of image texture, and extend its original × 2 architecture to × 3 and × 4 architectures. In addition, a fusion scheme reduces the number of required pre-trained weights. Experimentally, the proposed method requires 75% fewer pre-trained weights than LCDI. Compared to the related work OLM-SI (One linear learning mapping-SI), the run-time and the number of pre-trained weights of the × 2 proposed method are at least 90% lower. Compared to CNN-based SR methods, the proposed method achieves slightly lower performance, but at a much lower computational cost. In conclusion, the proposed method is a cost-effective and practical solution for resource-limited hardware devices.
UR - http://www.scopus.com/inward/record.url?scp=85139395908&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139395908&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2022.3208708
DO - 10.1109/ACCESS.2022.3208708
M3 - Article
AN - SCOPUS:85139395908
SN - 2169-3536
VL - 10
SP - 102076
EP - 102086
JO - IEEE Access
JF - IEEE Access
ER -