A Study on Super Resolution

  • 郭 芫宏

Student thesis: Doctoral Thesis


In recent years, image super-resolution (SR) technologies have been widely applied in daily life. As the resolution of digital displays improves from 1920 × 1080 (1080p Full HD) to 3840 × 2160 (2160p Ultra HD) and beyond, this issue becomes increasingly interesting and important. The main purpose of super resolution is to obtain high-resolution (HR) images from low-resolution (LR) ones and, moreover, to make the former look as if they had been acquired with a sensor of the expected high resolution, or at least as "natural" as possible. In general, SR algorithms can be roughly classified into four categories: interpolation-based, reconstruction-based, wavelet-based, and learning-based algorithms. Each category has its respective advantages and drawbacks, and thus combinations of them have been proposed in recent research to achieve better performance.

In this thesis, an SR algorithm combining the advantages of the reconstruction-based and wavelet-based categories is first proposed, in which an iterative refinement process is performed on the wavelet coefficients in the high-frequency subbands (LH, HL, and HH) of a wavelet-transformed image rather than directly on the original pixels in the spatial domain. Besides, to further meet hardware requirements, a modified version that adapts to a specified buffer size is also presented. The experimental results show that the proposed algorithms provide better performance.

Reconstruction-based SR algorithms usually perform well at exhibiting detail, and a popular concept in this category is to iteratively refine a preliminary HR image using a certain predefined constraint to obtain the final HR one. In other words, this can be regarded as properly adding details back to a preliminary HR image. In this category, the iterative refining process usually incurs great computational complexity, and fixed parameter values sometimes do not sufficiently and 
adaptively refine details according to the characteristics of various images. In this thesis, linear regression, a useful statistical tool that has recently been applied to super resolution for its estimation ability, is used in conjunction with the self-similarity of a pair of LR and HR images to solve the aforementioned problems. Unlike other SR algorithms in which regression is used to estimate interpolated pixels, the proposed algorithms combine the advantages of regression with those of reconstruction-based SR algorithms to construct detailed and natural HR images. An HR image obtained by fast interpolation is usually blurred, which can be regarded as the loss of some details in the enlarging process. How to estimate these lost details is the central issue in the proposed algorithms. An efficient SR algorithm is proposed in which simple linear regression models are established with details acquired from patches of LR images and then used to estimate details of HR images, exploiting the self-similarity of a pair of LR and HR images. The experimental results show that the proposed SR algorithm is not only effective but also efficient.

Another SR algorithm, which assumes that various details can be regarded as a combination of several oriented ones, is further proposed. Eight filters are designed to properly acquire the corresponding oriented details. Multiple linear regression models are established with oriented details acquired from the LR images by the designed filters and then used to estimate details of HR images from the corresponding oriented details acquired from the preliminary HR images by the same filters. To utilize the characteristics of different regions in an image more adaptively, a modified version that preliminarily segments the input LR image is also presented. The experimental results show clearly that the proposed algorithms perform well in both objective and subjective measurements. The greatest advantage of 
learning-based SR algorithms is their ability to construct "natural" details, and vector quantization (VQ) is a popular technique in this category. In this thesis, this advantage is also taken into consideration, and an SR algorithm using weighted VQ is proposed. Unlike other SR algorithms that utilize VQ to construct HR images, the code vectors most similar to the input patch are found by correlation coefficients instead of Euclidean distance, since the added-back details are obtained by using multiple linear regression to combine the code vectors selected from the codebooks rather than by using the selected code vectors directly. In this manner, the influence of limited codebook sizes on the performance of VQ can be decreased; that is, the cost of storing codebooks can be reduced while the performance is maintained or even improved. The experimental results show that the goal of the proposed algorithm is achieved.

An SR algorithm that presents more details while avoiding obvious jaggy artifacts is also exhibited in this thesis. A fast fractal super-resolution technique is adopted to obtain the preliminary HR image because of its ability to construct clear details. Post-processing is then proposed to decrease the obvious jaggy artifacts along slanted edges by directional blurring and enhancement using pre-designed oriented filters and patterns. In fact, this post-processing can be used to effectively decrease the jaggy artifacts caused by SR algorithms in general. The experimental results show that the details of the HR images are exhibited clearly and that the artifacts along strong slanted edges are greatly decreased.
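The self-similarity idea at the heart of the regression-based algorithms above can be sketched compactly. The snippet below is a minimal, hypothetical illustration, not the thesis implementation: it downscales the LR image to form a smaller LR/HR pair, fits a simple linear regression from interpolated pixel values to the details those pixels lost, and reuses the fitted model one scale up to add estimated details back to a preliminary HR image. The block-average and nearest-neighbour resamplers are placeholder stand-ins for whatever interpolation kernels the thesis actually uses, and the regression is fitted globally rather than per patch.

```python
import numpy as np

def downsample2(img):
    # Average 2x2 blocks (assumes even dimensions); stand-in for the
    # decimation used to build the smaller self-similar image pair.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    # Nearest-neighbour 2x enlargement; a crude stand-in for the fast
    # interpolation that produces a blurred preliminary HR image.
    return np.kron(img, np.ones((2, 2)))

def super_resolve2(lr):
    # Self-similarity: the LR image plays the role of the HR member of a
    # smaller LR/HR pair, so the details lost by down/up-sampling can be
    # observed directly at the lower scale.
    blurred = upsample2(downsample2(lr))
    lost = lr - blurred                       # details lost by interpolation
    # Simple linear regression: predict lost detail from pixel value.
    a, b = np.polyfit(blurred.ravel(), lost.ravel(), 1)
    hr0 = upsample2(lr)                       # blurred preliminary HR image
    return hr0 + (a * hr0 + b)                # add back estimated details
```

The thesis establishes such models patch-wise; the single global fit here only shows the overall shape of the pipeline.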
Date of Award: 19 May 2014
Original language: English
Supervisor: Shen-Chuan Tai (Supervisor)
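The weighted-VQ step summarized in the abstract, selecting code vectors by correlation coefficient and then combining them by multiple linear regression instead of copying a single nearest code vector, can also be sketched briefly. This is a hypothetical illustration under assumed shapes (a codebook of row vectors, a flattened detail patch, and a placeholder k), not the thesis implementation.

```python
import numpy as np

def corr_select(patch, codebook, k=3):
    # Pick the k code vectors most similar to `patch` by correlation
    # coefficient (mean-removed, norm-normalized dot product) rather
    # than by Euclidean distance.
    p = patch - patch.mean()
    c = codebook - codebook.mean(axis=1, keepdims=True)
    corr = (c @ p) / (np.linalg.norm(c, axis=1) * np.linalg.norm(p) + 1e-12)
    return np.argsort(corr)[-k:]

def weighted_vq_detail(patch, codebook, k=3):
    # Multiple linear regression (least squares, no intercept): find the
    # weights that best combine the selected code vectors, instead of
    # copying the single nearest code vector directly.
    idx = corr_select(patch, codebook, k)
    basis = codebook[idx].T                   # columns = chosen code vectors
    w, *_ = np.linalg.lstsq(basis, patch, rcond=None)
    return basis @ w                          # regression-weighted detail
```

Because several code vectors are blended, a small codebook can still reproduce patches it does not contain verbatim, which is the cost/performance trade-off the abstract describes.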
