Image fusion is used in a wide variety of applications, including medicine, remote sensing, machine vision, automatic change detection and biometrics. Despite the emergence of diverse image-capturing devices, no single capture can contain all the information in a scene; a complete picture is not always feasible because the optical lenses of imaging sensors, especially those with long focal lengths, have a limited depth of field. Image fusion addresses this limitation by combining multiple images into a composite product that reveals more information than any individual input image. Its goal is to integrate complementary multi-sensor, multi-temporal and/or multi-view data into a new image containing more information than the inputs. With the availability of multiple image sources, image fusion has emerged as a new and promising research area, and many fusion algorithms and software packages have been developed in recent years for various applications. The actual fusion process can take place at different levels of information representation; a generic categorization distinguishes the signal, pixel, feature and symbolic levels. The lowest level is pixel-level fusion, in which the intensity values of the source-image pixels are merged directly. The next level, feature-level fusion, operates on characteristics such as size, shape and edges. The highest level, decision-level fusion, deals with symbolic representations of the images.
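Pixel-level fusion, the lowest of these levels, can be illustrated with a minimal sketch. Simple averaging is only one of many possible pixel-level rules, and the function name here is illustrative:

```python
import numpy as np

def pixel_level_fuse(img_a, img_b):
    """Pixel-level fusion by averaging intensity values.

    Each output pixel is computed directly from the corresponding
    source pixels, with no feature extraction or symbolic reasoning.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return (a + b) / 2.0

# Two synthetic 4x4 sources: one bright on the left, one on the right.
left = np.zeros((4, 4)); left[:, :2] = 200.0
right = np.zeros((4, 4)); right[:, 2:] = 200.0
fused = pixel_level_fuse(left, right)
```

Each fused pixel carries half the intensity of each source, so information from both inputs survives in the composite.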
Currently, most image fusion is performed with pixel-based methods. Examples include a multi-focus fusion algorithm based on the ratio of blurred and original image intensities, and a multi-focus method that uses spatial frequency (SF) together with morphological operators. The advantage of pixel-level fusion is that the fused image retains the original information. Methods based on the wavelet transform, wavelet contrast, the contourlet transform, region statistics and multiresolution signal decomposition have been widely used in recent years; one such method merges two spatially registered images with different focus using multiresolution wavelet decomposition and an evolutionary strategy. This paper discusses the fusion of multi-focus images using the information (activity) level in image regions. Section 2 reviews spatial frequency and visibility. The fusion scheme and the experimental results are discussed in Section 3, at the end of which the performance assessment is presented.
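Since spatial frequency serves as an activity measure in this context, a minimal sketch of its usual definition may help: the root-mean-square of the row-wise and column-wise first differences. Note that this version normalizes by the number of differences rather than strictly by M·N:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency SF = sqrt(RF^2 + CF^2), where RF and CF are
    the RMS of row-wise and column-wise first differences.
    A larger SF indicates a better-focused, more detailed region."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

# A flat region has SF = 0; a high-contrast checkerboard scores high.
flat = np.zeros((8, 8))
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0
```

Comparing SF over corresponding blocks of two source images is a common way to decide which image is in focus at each location.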
QuickBird is a high-resolution commercial earth-observation satellite. The QuickBird multispectral and panchromatic images used here cover a portion of Denver, Colorado, USA, located between latitudes 40°0'22''N and 40°1'26''N and longitudes 105°17'18''W and 105°18'40''W. The satellite is an excellent source of environmental data for analyzing changes in land use and in agricultural and forest climates, and its imaging capabilities can be applied to a host of industries, including oil and gas exploration and production (E&P), engineering and construction, and environmental studies. The orbit altitude is 450 km and the orbit inclination is 97.2°, sun-synchronous. The satellite collects panchromatic (black and white) imagery at 60-70 cm resolution and multispectral imagery at 2.4-2.8 m resolution.
The Wavelet Method
Wavelet techniques are increasingly used in image processing. The algorithm used in this study is based on multiresolution wavelet decomposition: the image is decomposed into multiple channels according to their local frequency content, yielding new images at different degrees of resolution. In this fusion scheme, a multiscale transform is performed to construct intermediate bands with the same size as the PAN image. The method decomposes the panchromatic image and each band of the resampled multispectral image to a chosen wavelet level. Replacing the low-frequency component of the panchromatic image with the low-frequency component of the resampled multispectral image produces the fused multiscale representations, and performing the inverse wavelet transform on these representations yields the intermediate bands, whose sizes match the PAN image. The wavelet transform of a distribution f(t) can be expressed as

W(a, b) = (1/√a) ∫ f(t) ψ*((t − b)/a) dt
where a and b are the scaling and translational parameters, respectively. Each basis function is a scaled and translated version of a function ψ called the mother wavelet; these basis functions have zero mean, ∫ ψ(t) dt = 0. The image is decomposed into wavelet planes by constructing the sequence of approximations F1(P) = P1, F2(P1) = P2, F3(P2) = P3, ..., where P represents the image. To construct this sequence, successive convolutions are performed with a filter obtained from an auxiliary (scaling) function; the use of this function leads to a convolution with a 5x5 mask.
The wavelet planes are computed as the differences between two consecutive approximations Pl-1 and Pl.
Let Wl = Pl-1 − Pl (l = 1, ..., n), in which P0 = P; the reconstruction expression can be written as

P = Pn + Σ (l = 1 to n) Wl
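This additive decomposition and its reconstruction identity can be sketched as follows. This is a simplified version of the à trous scheme: the 5x5 mask comes from the B3-spline scaling function as in the text, but for brevity the filter is not dilated between levels as the full algorithm would require. The reconstruction identity holds exactly regardless, because the sum of planes telescopes:

```python
import numpy as np

# B3-spline scaling filter; its outer product gives the 5x5 mask.
B3 = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

def smooth(p):
    """Separable convolution with the 5x5 B3-spline mask (rows then
    columns, edges mirrored), producing the next approximation P_l."""
    conv = lambda v: np.convolve(np.pad(v, 2, 'reflect'), B3, 'valid')
    out = np.apply_along_axis(conv, 1, p)
    return np.apply_along_axis(conv, 0, out)

def wavelet_planes(p, n):
    """Decompose image P into n wavelet planes W_l = P_{l-1} - P_l
    plus the residual approximation P_n."""
    planes, approx = [], p.astype(np.float64)
    for _ in range(n):
        nxt = smooth(approx)
        planes.append(approx - nxt)  # W_l = P_{l-1} - P_l
        approx = nxt
    return planes, approx

# Perfect reconstruction: P = P_n + sum of all W_l.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
planes, residual = wavelet_planes(img, 3)
recon = residual + sum(planes)
```

The telescoping sum guarantees that no information is lost in the decomposition, which is what makes the planes safe to inject into another image.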
Wavelet-based image merging may be performed in two ways: 1) replacing some wavelet coefficients of the multispectral image by the corresponding coefficients of the high-resolution image; or 2) adding the high-resolution coefficients to the multispectral data. The second option preserves all of the spatial information in the multispectral image, and it in turn allows two possibilities for injecting the high-resolution information: adding it directly to the RGB components, or adding it to the intensity component of an IHS representation. The latter approach is used here because it is more straightforward and because it essentially affects the spatial information but not the spectral information. The multispectral image and the high-resolution panchromatic image are decomposed into n wavelet planes; the multispectral bands assigned to the RGB components are transformed into the IHS representation, and the wavelet planes (horizontal, vertical and diagonal detail) are added to the intensity component. The IHS transform is suited to merging images because it separates the spectral information of an RGB composition into its hue (H) and saturation (S) components while isolating most of the spatial information in the intensity (I) component. Since the IHS transform is always applied to an RGB composite, the fusion is applied to groups of three bands of the MS image. This transformation yields new intensity, hue and saturation components, and the PAN image then replaces the intensity component. Beforehand, to minimize the modification of the spectral information of the fused MS image with respect to the original MS image, the histogram of the PAN image is matched to that of the intensity component. Applying the inverse wavelet transform, we obtain the fused RGB image, with the spatial detail of the PAN image incorporated into it.
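The intensity-substitution step can be sketched minimally as follows. Here the linear intensity I = (R+G+B)/3 stands in for the full IHS transform, mean/std matching stands in for full histogram matching, and the function name is illustrative. Adding the same offset (PAN' − I) to every band changes only the intensity of this linear model, leaving band differences (and hence hue and saturation) untouched:

```python
import numpy as np

def ihs_pan_sharpen(rgb, pan):
    """Replace the intensity of the linear IHS model with the PAN
    image after matching PAN to the intensity's mean and std
    (a crude stand-in for histogram matching)."""
    rgb = rgb.astype(np.float64)
    pan = pan.astype(np.float64)
    i = rgb.mean(axis=2)                    # I = (R + G + B) / 3
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12)
    pan_m = pan_m * i.std() + i.mean()      # match first two moments of I
    # Same offset added to R, G and B: intensity becomes pan_m,
    # band-to-band differences are preserved.
    return rgb + (pan_m - i)[..., None]

rng = np.random.default_rng(1)
ms = rng.random((8, 8, 3))
pan = rng.random((8, 8))
fused = ihs_pan_sharpen(ms, pan)
```

After fusion, the intensity of the result carries the PAN spatial detail while retaining the global statistics of the original intensity, which is exactly why the histogram matching is performed first.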
Fig. 1 Methodology flow chart: processing steps of wavelet-based image fusion
1) Decompose the high-resolution P image into a set of low-resolution P images with wavelet coefficients for each level.
2) Replace a low-resolution P image with an MS band at the same spatial-resolution level in the IHS transform.
3) Perform the reverse wavelet transform to convert the decomposed and replaced P set back to the original P resolution level. The replacement and reverse transform are performed three times, once for each spectral band.
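The three steps above can be sketched for one band as follows. For compactness, a moving-average low-pass stands in for the wavelet approximation at the chosen decomposition level, and `fuse_band` is an illustrative name, not part of the described method:

```python
import numpy as np

def lowpass(img, k=5):
    """Crude separable moving-average low-pass, standing in for the
    wavelet approximation at the chosen decomposition level."""
    pad, ker = k // 2, np.ones(k) / k
    conv = lambda v: np.convolve(np.pad(v, pad, 'reflect'), ker, 'valid')
    out = np.apply_along_axis(conv, 1, img.astype(np.float64))
    return np.apply_along_axis(conv, 0, out)

def fuse_band(pan, ms_band):
    """Steps 1-3 for one band: the PAN detail (PAN minus its
    approximation) is kept, and the MS band supplies the low
    frequencies that replace the PAN approximation."""
    pan = pan.astype(np.float64)
    detail = pan - lowpass(pan)                 # step 1: PAN detail planes
    return ms_band.astype(np.float64) + detail  # steps 2-3: replace & rebuild

# Once per spectral band, e.g.:
# fused = np.dstack([fuse_band(pan, ms[..., b]) for b in range(3)])
```

If the PAN image carries no detail (a constant image), the fused band reduces to the MS band itself, which is the expected degenerate behavior of the substitution.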
The Mean Square Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR) are the two error metrics used here to compare image quality. The MSE represents the cumulative squared error between the processed and the original image; the lower the MSE, the lower the error:

MSE = (1/(M·N)) Σ Σ [I1(m, n) − I2(m, n)]²
M and N are the number of rows and columns in the input images. The PSNR is the peak signal-to-noise ratio, in decibels, between two images; it is often used as a quality measure between an original and a processed image, and the higher the PSNR, the better the quality of the reconstructed image. The PSNR is computed using the following equation:

PSNR = 10 · log10(R² / MSE)
where R is the maximum fluctuation in the input image data type (255 for 8-bit imagery).
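The two metrics can be computed directly; a minimal sketch, with R defaulting to 255 for 8-bit imagery:

```python
import numpy as np

def mse(ref, img):
    """Mean Square Error over an M x N image pair."""
    d = ref.astype(np.float64) - img.astype(np.float64)
    return float(np.mean(d ** 2))

def psnr(ref, img, r=255.0):
    """Peak Signal-to-Noise Ratio in dB; r is the peak value of the
    image data type. Identical images give infinite PSNR."""
    m = mse(ref, img)
    return float('inf') if m == 0 else 10.0 * np.log10(r ** 2 / m)

# A uniform error of 16 grey levels gives MSE = 256
# and PSNR = 10*log10(255^2 / 256) ≈ 24.05 dB.
ref = np.zeros((4, 4))
deg = np.full((4, 4), 16.0)
```
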
Results And Discussion
To evaluate the performance and efficiency of the proposed method in the experiment on the fusion of QuickBird images, the original PAN and MS images are first atmospherically corrected and then spatially degraded to resolutions of 4 m and 16 m, respectively. The QuickBird multispectral bands selected for fusion with the QuickBird PAN image are bands 4, 2 and 1 (the near-infrared, green and blue bands). To display the fused images, a synthetic color composite is used that substitutes the grey values of the near-infrared band for the R component of the RGB color space. Among the fused images produced by the different methods, the result of the proposed method is the best compared with the PCA and Brovey methods, reproducing both the spectral characteristics of the original multispectral image and the spatial information of the panchromatic image very well. In terms of spatial effects, the results of all methods display the same details.
Fig. 2 (A) Multispectral image; (B) Panchromatic image; (C) Fused image
An image fusion method based on the wavelet decomposition technique combined with the IHS transform has been presented. The test results show that the proposed approach preserves the spectral content of the multispectral image and the spatial details of the PAN image very well, and that the color is smoothly integrated into the spatial features. The proposed fusion technique improves the PSNR value by 8.51% and reduces the Mean Square Error to 35.92. Future research can apply the wavelet decomposition technique to hyperspectral images.