
Resolution enhancement is one of the most swiftly
growing areas of research in the field of image processing. Super-resolution algorithms play a vital role in many
real-world problems in fields such as satellite and aerial imaging,
medical image processing, facial image analysis, sign and number plate reading,
and biometric recognition. Improving the spatial resolution of
images at acquisition time is not practical in many real-time applications, due to the hardware
limitations of sensors and optical lenses. Super-resolution algorithms are an alternative
approach to obtaining high spatial resolution images.

This paper presents a study of various super-resolution
algorithms based on single and multiple images, and summarises
approaches based on interpolation, the frequency domain, regularization, and
learning.


I. INTRODUCTION

           In most digital imaging
applications, high resolution images or videos are required
to improve the pictorial information of an image for human interpretation. Image resolution represents the detail
contained in an image: the higher the resolution, the more image detail. The
resolution of a digital image can be specified in different ways: pixel
resolution, spatial resolution, spectral resolution, temporal resolution, and
radiometric resolution. The spatial resolution of the image is firstly limited
by the imaging sensors or the image acquisition device. In general, a modern
image sensor is a charge-coupled device (CCD) or a complementary
metal-oxide-semiconductor (CMOS) active-pixel sensor. These sensors are arranged
in a two-dimensional array to capture two-dimensional image signals. The spatial
resolution of the image is determined by the number of sensors used per unit
area: the higher the density of the sensors, the higher the spatial resolution possible
for the imaging system. An imaging system with an inadequate number of detectors will generate
low resolution images with blocky effects, due to aliasing from the low spatial
sampling frequency. In order to increase the spatial resolution of an imaging
system, the sensor size can be reduced to increase the sensor density. But this
results in shot noise, generated by the decrease in the amount of light
incident on each sensor as the sensor size decreases. Also, the hardware cost
of the sensor increases with image pixel density. Therefore, the
hardware limitation on the size of the sensor restricts the resolution of the
image.

          The image details in high frequency
bands are also limited by the optics, such as lens blur, lens aberration effects,
aperture diffraction, and optical blurring due to motion. In most real-time applications, constructing
imaging chips and optical components to obtain very high-resolution images is
expensive and impractical, so improving image resolution by refining the
hardware, by increasing the number of sensors or using better optics, is expensive
and time consuming. Optimally balancing the trade-off among image resolution, Signal-to-Noise
Ratio (SNR), and acquisition time is a critical challenge in real-time
applications. An alternative approach is to accept the
image degradation and use signal processing algorithms to post-process
the captured images. These techniques are specifically referred to as
Super-Resolution (SR) reconstruction. Super-resolution, an off-line
approach for improving image resolution, is free from these trade-offs. This has made super resolution algorithms play a vital
role in real-time applications, and researchers continue to develop new super-resolution
algorithms for specific applications. The next section of this paper reviews
the imaging models used in most SR algorithms. The rest of the
paper explains the frequency domain SR methods and the spatial
domain SR algorithms.

 

II. IMAGE OBSERVATION
MODEL

        Let X denote the desired High
Resolution (HR) image, i.e., the digital image sampled above the Nyquist
rate from the band-limited continuous scene, and let Y_k be
the k-th Low Resolution (LR) observation from the camera. Assuming there are K LR
frames of X, the LR observations are related to the HR scene X by

 

          Y_k = D_k H_k F_k X + V_k,   k = 1, 2, …, K                (1)

 

         where F_k models the warping function, H_k
models the blurring effects, D_k is the down-sampling
operator, and V_k is the noise term. These linear equations can
be rearranged into matrix form, which is given by

 

          Y = AX + V                                                        (2)

Fig. 1 shows the image observation
model for super resolution. The order of
applying the warping and blurring functions specified in Eq. (1) can be reversed, but
this may result in systematic errors if the motion is estimated from the LR
images. However, several papers note that these two operations
commute and can be treated as block-circulant matrices if the point spread
function is space invariant, normalized, has nonnegative elements, and the
motion between the LR images is translational.
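As a concrete illustration of Eq. (1), the following Python sketch simulates LR observations from an HR image by applying warping, blurring, down-sampling, and additive noise as direct image operations. The specific warp, PSF width, decimation factor, and noise level are illustrative assumptions, not values taken from any particular SR paper.

# Minimal sketch of the observation model in Eq. (1): Y_k = D_k H_k F_k X + V_k.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def observe_lr(hr, dx=0.5, dy=0.5, blur_sigma=1.0, factor=2, noise_std=2.0, rng=None):
    """Simulate one LR observation Y_k from an HR image X."""
    rng = np.random.default_rng() if rng is None else rng
    warped = shift(hr, (dy, dx), order=3, mode='nearest')     # F_k: sub-pixel translation
    blurred = gaussian_filter(warped, sigma=blur_sigma)       # H_k: optical/sensor blur (PSF)
    decimated = blurred[::factor, ::factor]                   # D_k: down-sampling
    noisy = decimated + rng.normal(0.0, noise_std, decimated.shape)  # V_k: additive noise
    return noisy

# Example: generate K = 4 LR frames with different sub-pixel shifts.
hr = np.random.default_rng(0).random((128, 128)) * 255.0
lr_frames = [observe_lr(hr, dx=k * 0.25, dy=k * 0.25) for k in range(4)]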

 

Fig. 1. Super resolution image observation model.

III. SUPER RESOLUTION ALGORITHMS

            SR algorithms can be classified
based on the following parameters: (i) the domain employed, (ii) the number of
LR images involved, and (iii) the reconstruction method. Based on domain, SR algorithms
operate either in the spatial domain or the frequency domain. The majority of these algorithms have been
developed in the spatial domain, even though SR algorithms originally emerged
from signal processing techniques in the frequency domain. Based on the number of LR images involved, SR
algorithms can be classified into two classes: single-image and multiple-image. Single-image SR
algorithms mostly employ learning algorithms to improve resolution using
the relationship between LR and HR images from a training database. Multiple-image
SR algorithms work on the assumption that the LR observations have
relative geometric and photometric displacements from the targeted HR image.
These algorithms make use of the differences between
the LR observations to reconstruct the targeted HR image, and hence are
referred to as reconstruction-based SR algorithms.

 

IV. MULTI-IMAGE SUPER RESOLUTION

                 Multi-image SR methods can be
implemented in the frequency domain and in the spatial domain. Fourier transform based
iterative methods for super resolution in the frequency domain were
first introduced by Gerchberg [1] and
then by De Santis and Gori [2]. Later, Tsai and Huang's system
[3] was the first multiple-image SR algorithm in the frequency
domain. This algorithm was developed to work on LR images acquired by the
Landsat 4 satellite, which produces a set of similar but translated
images of the same area of the earth, i.e., of a continuous scene.

Multi-image super-resolution methods can be classified into (i) frequency
domain approaches, (ii) spatial domain approaches, and (iii) regularization based approaches.

 

A. Frequency domain approaches:

          Frequency domain methods are based on
three fundamental principles: i) the shifting property of the Fourier transform
(FT), ii) the aliasing relationship between the continuous Fourier transform
(CFT) and the discrete Fourier transform (DFT), and iii) the assumption that the original scene is
band-limited [2]. These properties allow the formulation of a system of equations
relating the aliased DFT coefficients of the observed images to samples of the
CFT of the unknown scene. These equations are solved to yield the frequency domain
coefficients of the original scene, which may then be recovered by the inverse DFT.
Let the continuous image and its CFT be denoted by y(t1, t2) and Y(u, v) respectively.
Global translations yield M shifted images, y_r(t1, t2) = y(t1 + δ_r1, t2 + δ_r2),
where r = 1, 2, …, M. The shifting property of the CFT relates spatial domain translation
to phase shifting in the frequency domain as

 

                   
                    Y_r(u, v) = e^{j2π(δ_r1·u + δ_r2·v)} · Y(u, v)                 (3)

 

The continuous image y_r(t1, t2) is sampled with sampling periods T_x and T_y
to obtain the discrete, aliased image x_r(m, n). Its M×M Discrete
Fourier Transform (DFT) is given by

               X_r(k, l) = Σ_{m=0}^{M-1} Σ_{n=0}^{M-1} x_r(m, n) · e^{-j2π(mk + nl)/M}           (4)

  where k, l = 0, 1, 2, …, M-1.

The CFT of the image and the DFTs of the shifted and sampled images
are related via aliasing, as discussed in [5]. The DFT of an LR image can be
represented as

                X_r(k, l) = (1/(T_x·T_y)) Σ_p Σ_q Y_r( k/(M·T_x) − p·f_sx , l/(M·T_y) − q·f_sy )           (5)

where f_sx = 1/T_x and f_sy = 1/T_y are the sampling rates in the x and y directions
respectively. Assuming y(t1, t2) is band limited, the sums in (5) are finite, and
(5) can be rewritten in matrix form as

 

                            X = Φ·Y                                        (6)

 

Here X is a column vector whose r-th element contains the DFT coefficients X_r(k, l),
Y is a column vector containing samples of the CFT of the image y(t1, t2), and Φ is
a matrix that relates the DFT of the observed LR images (X) to the CFT of the high
resolution image (Y). More recently, the Discrete Cosine
Transform (DCT) and the Discrete Wavelet Transform (DWT) have replaced the
Fourier domain. Rhee and Kang [4] modified
the Fourier transform based approach to perform regularized deconvolution
using the DCT. The DWT is used to decompose
the input image into structurally correlated sub-images, which exploits the
self-similarities between local neighbouring regions.
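The shifting property underlying Eq. (3) can be checked numerically. The short NumPy sketch below uses a circular integer-pixel shift as a discrete stand-in for the continuous translation; the image and shift values are arbitrary assumptions for illustration.

# Numerical check of the Fourier shifting property behind Eq. (3),
# using a circular integer-pixel shift in place of the continuous case.
import numpy as np

rng = np.random.default_rng(1)
y = rng.random((64, 64))
dm, dn = 3, 5                                  # integer shift in rows and columns
y_shifted = np.roll(y, shift=(dm, dn), axis=(0, 1))

M, N = y.shape
k = np.arange(M).reshape(-1, 1)
l = np.arange(N).reshape(1, -1)
phase = np.exp(-2j * np.pi * (k * dm / M + l * dn / N))   # phase ramp predicted by the shift property

lhs = np.fft.fft2(y_shifted)                   # DFT of the shifted image
rhs = phase * np.fft.fft2(y)                   # phase-shifted DFT of the original image
print(np.allclose(lhs, rhs))                   # True: translation corresponds to a phase shift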

 

 B. Spatial domain approaches:

i. Interpolation:

 

Many spatial
domain approaches have been proposed over the years to overcome the limitations of
the frequency domain methods. The basic concept of image interpolation algorithms
lies in producing a high-resolution image by upsampling the low-resolution
image. The interpolation algorithms often exploit the aliasing property and
perform dealiasing of the LR image during the upsampling process. This yields the simplest,
non-iterative forward model for SR reconstruction in the spatial
domain, analogous to the frequency domain approach. The interpolation
based SR approach involves three steps, namely registration, interpolation, and
restoration. Fig. 2 shows the procedure of such an approach. Assume H_k is
Linear Spatially Invariant (LSI) and is the same for all K frames, denoted
as H. Considering simple motion
models such as translation and rotation, H and F_k commute [6], [7], and the relation is given by

           Y_k = D_k H F_k X + V_k = D_k F_k Z + V_k,   k = 1, 2, …, K               (7)

where Z = HX is the blurred HR image. This formulates a
forward, non-iterative approach based on interpolation and restoration.
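A minimal sketch of the three-step pipeline (registration, interpolation, restoration) follows, assuming the registration shifts are already known and using a Wiener filter as a simple stand-in for the restoration step; all parameter values are illustrative assumptions.

# Sketch of the registration / interpolation / restoration pipeline implied by Eq. (7).
# Registration is assumed known; the restoration choice is an illustrative stand-in.
import numpy as np
from scipy.ndimage import shift, zoom
from scipy.signal import wiener

def interpolation_sr(lr_frames, shifts, factor=2):
    """Fuse K registered LR frames onto an HR grid, then restore (denoise/deblur)."""
    upsampled = []
    for lr, (dy, dx) in zip(lr_frames, shifts):
        up = zoom(lr, factor, order=3)                                        # interpolate onto the HR grid
        up = shift(up, (-dy * factor, -dx * factor), order=3, mode='nearest') # undo the registration shift
        upsampled.append(up)
    fused = np.mean(upsampled, axis=0)                                        # simple fusion of registered frames
    return wiener(fused, mysize=3)                                            # restoration step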

 

Fig. 2. Interpolation steps.

There are many more sophisticated interpolation approaches in the literature, including Cubic
B-Spline, New Edge Directed Interpolation (NEDI, covariance based), Gradient Based
Adaptive Interpolation (GBAI), Auto-Regressive Interpolation (ARI), and Edge Guided
Interpolation (EGI).

 

ii. Iterative Back Projection
approach:

Irani and Peleg [8] proposed the Iterative
Back Projection (IBP) algorithm,
in which the high-resolution image is estimated by iteratively back-projecting the
difference between the observed low-resolution images and simulated low-resolution
images. The initial HR estimate is generated by upsampling (interpolating) the input LR
image. This estimate is degraded and down-sampled to generate a simulated LR image. The HR image is
then updated using a high pass filter for edge projection and by back-projecting the error (difference)
between the simulated LR image and the observed LR image. This process is repeated
for a predefined number of iterations to minimize the energy of the error.
Mathematically, the SR update according to IBP is written as:

    
Y^(n+1) = Y^(n) + Y_e + HPF(X^(0))                                    (8)

 

where Y^(n+1) is the estimated
HR image at the (n+1)-th iteration, Y^(n)
is the estimated HR image at the n-th iteration,
Y_e is the error correction, and HPF(X^(0)) is the high frequency
content of the image obtained from the interpolation of the initial LR image. In the IBP
method, to generate the simulated LR image, the estimated HR image must be
down-sampled. This down-sampling decreases the sampling frequency, which produces
distortions in the high frequency components and aliasing. Therefore, the HR
image obtained from the high pass filter needs to be
further filtered by a Gaussian filter to eliminate the distortions from the
down-sampling procedure. However, obtaining a unique solution is difficult
in this method.
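The following is a minimal sketch of the IBP loop described above, assuming a Gaussian blur plus integer decimation as the forward model and a Gaussian back-projection kernel; the scale factor, blur width, step size, and iteration count are illustrative assumptions.

# Minimal sketch of Iterative Back-Projection in the spirit of Irani and Peleg.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def ibp_sr(lr, factor=2, n_iter=20, blur_sigma=1.0, step=1.0):
    hr = zoom(lr, factor, order=3)                        # initial HR estimate by interpolation
    for _ in range(n_iter):
        simulated = gaussian_filter(hr, blur_sigma)[::factor, ::factor]  # forward model: blur + decimate
        error = lr - simulated                            # residual between observed and simulated LR
        back = zoom(error, factor, order=3)               # project the residual back onto the HR grid
        hr = hr + step * gaussian_filter(back, blur_sigma)  # back-projection kernel (smoothing) update
    return hr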

 

iii. Projection Onto Convex Sets (POCS):

 Patti and
Tekalp [9] proposed the POCS method. They developed a
set-theoretic methodology to generate a high resolution image that is
consistent with the details from the observed low-resolution images and
the prior image model. These pieces of information are associated with constraint
sets in the solution space; the intersection of these sets represents the space
of permissible solutions.

           The
important steps in the POCS method are the construction of the convex sets and the
computation of the weight matrix W. The convex set C_i is the set in which all the
signals share the same property; a convex set is
defined for each pixel (m1, m2) in each LR image S_k,
denoted C_k(m1, m2). In each iteration
the convex sets are adjusted adaptively. A confidence bound δ_k(m1, m2) indicates
the statistical confidence with which the corresponding region of the high-resolution image
is a member of the set C_k(m1, m2). The weight matrix W_k(m1, m2; n1, n2), which has a Gaussian
distribution, denotes the weight from pixel (m1, m2) in the k-th LR image to pixel (n1,
n2) in the HR image, and its estimation can be refined by statistical
and learning approaches. POCS is a simple and effective way to incorporate
constraints and priors that stochastic approaches to super
resolution cannot easily accommodate. However, this method does not yield a unique
solution, as the result of the iteration depends on the initial value.
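A highly simplified POCS-style sketch is given below: the HR estimate is alternately projected onto a data-consistency set that tolerates a residual of at most delta per LR pixel and onto an amplitude constraint set. The forward model, the value of delta, and the way the correction is redistributed are illustrative assumptions rather than the exact Patti and Tekalp formulation.

# Simplified POCS-style sketch: alternate projections onto a data-consistency set
# and an amplitude constraint set. Parameters are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pocs_sr(lr, factor=2, n_iter=10, blur_sigma=1.0, delta=1.0):
    hr = zoom(lr, factor, order=3)                         # initial estimate
    for _ in range(n_iter):
        simulated = gaussian_filter(hr, blur_sigma)[::factor, ::factor]
        residual = lr - simulated
        # Project onto the data-consistency set: correct only where the residual
        # exceeds the confidence bound delta (cf. the sets C_k(m1, m2)).
        residual = np.where(np.abs(residual) > delta,
                            residual - np.sign(residual) * delta, 0.0)
        hr = hr + zoom(residual, factor, order=1)          # spread the correction back to the HR grid
        hr = np.clip(hr, 0.0, 255.0)                       # projection onto the amplitude constraint set
    return hr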

 

C. Regularisation
Based Approaches:

 

Regularization methods are
effective when the number of LR
images is limited or the blur operators are ill-conditioned. This approach applies either
a deterministic or a stochastic regularization strategy to incorporate prior
knowledge of the unknown HR image. In terms of the Bayesian approach, the information about the high resolution
image that can be extracted from the low-resolution images is contained in the
probability distribution of the high resolution image. The information from the
observed low-resolution images and the prior knowledge of the high resolution
image can be exploited by applying Bayesian inference to estimate this probability
distribution.

        The two
most widely used Bayesian approaches are the maximum likelihood estimation (MLE) approach
and the maximum a posteriori (MAP) estimation approach. The stochastic MAP approach is
the most widely used as it offers
flexibility in including a priori information and in constructing the
relation between the LR images and the unknown HR image. The MLE approach is a variation of MAP
in which prior information about the HR image is not used, so MLE is less common
than MAP. Tom and Katsaggelos [10] explained the application of MLE for the SR of an image. The MAP
estimation seeks the HR image Â_MAP for which the a posteriori probability P(A|B)
is maximum:

     Â_MAP = arg max_A P(A|B)                (9)

Using Bayes' theorem, this can be written as

     Â_MAP = arg max_A [ log P(B|A) + log P(A) ]    (10)

 

where
P(B|A) is the likelihood function and P(A) is the prior on the HR image. The HR image is computed
by solving the optimization problem defined in Eq. (10). To construct the image prior
model, various models are available in the literature, such as the TV norm, the l1 norm of horizontal and vertical
gradients, the Simultaneous Autoregressive (SAR) norm, the Gaussian MRF model, the
Huber MRF model, and the Conditional Random Field (CRF) model. The Markov Random Field
(MRF) is commonly used as the prior model, and the Probability Density Function
(PDF) of the noise is used to determine the likelihood function.
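As an illustration of the MAP/regularized formulation, the sketch below minimizes a quadratic data-fidelity term plus a simple smoothness (Gaussian-MRF-like) prior by gradient descent. The quadratic prior, step size, and regularization weight are illustrative assumptions; a TV or Huber-MRF prior would replace the Laplacian term in practice.

# Gradient-descent sketch of a regularized (MAP-style) reconstruction with a
# quadratic smoothness prior. All parameter values are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom, laplace

def map_sr(lr, factor=2, n_iter=50, blur_sigma=1.0, lam=0.05, step=0.5):
    hr = zoom(lr, factor, order=3)                                       # initialization
    for _ in range(n_iter):
        simulated = gaussian_filter(hr, blur_sigma)[::factor, ::factor]  # forward model D H X
        residual = zoom(lr - simulated, factor, order=1)                 # approximate adjoint of D
        data_grad = gaussian_filter(residual, blur_sigma)                # approximate adjoint of H
        prior_grad = laplace(hr)                                         # gradient direction of a quadratic smoothness prior
        hr = hr + step * (data_grad + lam * prior_grad)                  # descent step on data term + prior
    return hr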

 

V. SINGLE FRAME SUPER RESOLUTION

 

Most single image based SR algorithms are called learning based SR algorithms. The
high frequency information lost during the image acquisition process is
extracted from a training set by a suitable learning mechanism, and this
information is integrated with the input LR image to achieve a super-resolved
image. The performance of learning-based SR methods depends strongly on the
training set, which is chosen so that it contains high frequency
information and is similar to the input LR image. Fig. 3 shows
the concept of the learning based SR algorithms.

       Basically, the learning-based SR methods
include the following three stages: feature extraction, learning, and
reconstruction. Mjolsness [8] used a neural
network to enhance the resolution of fingerprint images; the
power of neural networks lies in their ability to approximate any continuous
function. The most common learning models in
the literature are Best Matching, Neighbor Embedding, and Sparse Representation
models. In recent years deep learning has played a vital role in learning based
super resolution methods. Learning methods
employ Machine Learning (ML) techniques to locally estimate the HR details of
the output image. These may be pixel-based, involving statistical learning, or patch-based,
involving a dictionary of LR to HR correspondences between square pixel blocks; the
patch-based techniques, also called example-based methods, exploit internal similarities within the
same image.
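A toy sketch of the "best matching" example-based idea follows: each LR patch is matched to its nearest LR training patch, and the paired HR detail patch is pasted into the upsampled image. The patch size, dictionary layout, and the absence of overlap blending are simplifying assumptions.

# Toy example-based (best matching) SR sketch. Dictionary construction and
# all parameters are illustrative assumptions.
import numpy as np

def example_based_sr(lr, lr_patches, hr_details, patch=3, factor=2):
    """lr_patches: (N, patch*patch) flattened LR training patches;
       hr_details: (N, patch*factor, patch*factor) paired HR detail patches."""
    h, w = lr.shape
    hr = np.kron(lr, np.ones((factor, factor)))           # coarse pixel-replicated base image
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            q = lr[i:i + patch, j:j + patch].ravel()
            idx = np.argmin(((lr_patches - q) ** 2).sum(axis=1))   # best-matching training patch
            hi, hj = i * factor, j * factor
            hr[hi:hi + patch * factor, hj:hj + patch * factor] += hr_details[idx]  # paste HR detail
    return hr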

 

Fig. 3. Learning based super resolution model.

 

C. Peyrard [9]
compared the performance of multilayer perceptrons and convolutional neural
networks for image super resolution. Even though deep learning is growing
rapidly, it has its constraints, such as the inefficiency
of a network left to learn by itself, the difficulty of architecture choice and conditioning, and
non-informative gradients.

 

VI. CHALLENGES FOR SUPER-RESOLUTION

In the previous
sections, various SR algorithms were discussed based on the domain employed and the
number of images used. A few challenges that prevent SR algorithms from working
well in real-time applications are listed below.

i. Image
Registration: The performance of these SR methods depends heavily on the
estimation of the registration parameters, so a small sub-pixel error in the
registration may result in a very different estimate. Image registration is an
important factor for the success of multiframe SR reconstruction, where
complementary spatial samplings of the HR image are fused. Traditional SR reconstructions
usually treat image registration as a process distinct from the HR image
estimation, so the quality of the recovered HR image depends on the image registration accuracy of the
previous step.
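As an illustration of the registration step on which the reconstruction depends, the sketch below estimates a translational shift by phase correlation. This integer-pixel version is a simplification; sub-pixel accuracy would require interpolating the correlation peak.

# Phase-correlation estimate of the translation between two equally sized images.
import numpy as np

def phase_correlation_shift(ref, moving):
    """Return the (row, col) shift by which `moving` is displaced relative to `ref`."""
    F_ref, F_mov = np.fft.fft2(ref), np.fft.fft2(moving)
    cross_power = F_mov * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12             # normalized cross-power spectrum
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)   # correlation peak gives the shift
    # Wrap large indices to negative shifts.
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))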

 

ii. Computation
Efficiency: The
intensive computation due to the large number of unknowns, which requires expensive
matrix manipulations, limits the practical application of SR algorithms. Farsiu [7]
showed that applying D_k, H_k, and F_k directly as the
corresponding image operations of downsampling, blurring, and shifting,
bypassing the need to explicitly construct the matrices, brings significant
speed-ups.
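A small illustration of this point: for a modest 512×512 HR image the explicit system matrix would have roughly 7×10^10 entries, whereas the same operators can be applied directly as image operations. The sizes and operators below are illustrative assumptions.

# Illustration: explicit matrices are infeasible, direct image operations are cheap.
import numpy as np
from scipy.ndimage import gaussian_filter, shift

n = 512 * 512                                    # pixels in a modest 512x512 HR image
print(f"explicit system matrix entries: {n ** 2:,}")   # ~6.9e10 entries, impractical to store

hr = np.zeros((512, 512))
lr = gaussian_filter(shift(hr, (0.5, 0.5)), 1.0)[::2, ::2]   # F, H, D applied directly as image operations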

 

iii. Robustness
Aspects: Traditional SR techniques are
vulnerable to the presence of outliers due to motion errors, inaccurate blur
models, noise, moving objects, and motion blur.

 

VII. CONCLUSION

 

     In this
survey paper, an understanding of image super resolution is presented together with various existing
super resolution algorithms. In addition to the concepts provided,
the pros and cons of the algorithms are discussed, highlighting the various
algorithms based on the domain used and the number of images used. A sequence of LR images is used to extract an SR image in most of
the approaches. It is difficult to add a prior with the IBP approach. The
projection onto convex sets method uses a priori information, but it does not
give a unique solution and suffers from higher computational cost. Though the
iterative back-projection (IBP) method is simple, it suffers from ringing
effects, especially at edges. The regularized SR approach (MAP) gives a
unique solution and offers robustness and flexibility in modelling noise
characteristics. The ML estimator does not use a prior term. The learning based
approach requires a huge training data-set; furthermore,
in this approach the quality of the SR image depends on the quality of the HR patches
retrieved from the training data. The challenges in the reconstruction
of super resolution images are also discussed.
In general, the choice of SR algorithm depends on the application and its
constraints. An SR algorithm that is suitable for satellite imaging may not
work well for medical image applications or facial image processing. This fact
attracts researchers to continue publishing new algorithms
tailored to specific applications.
