With the development of new imaging sensors arises the need for a meaningful combination of all employed imaging sources. The actual fusion process can take place at different levels of information representation; a generic categorization considers the levels, sorted in ascending order of abstraction: signal, pixel, feature and symbolic level. This work focuses on the so-called pixel-level fusion process, in which a composite image is built from several input images. To date, the result of pixel-level image fusion is primarily intended for presentation to a human observer, especially in image sequence fusion (where the input data consists of image sequences). A possible application is the fusion of forward-looking infrared (FLIR) and low-light visible (LLTV) images obtained by an airborne sensor platform to aid a pilot navigating in poor weather conditions or darkness.
In pixel-level image fusion, some generic requirements can be imposed on the fusion result. The fusion process should preserve all relevant information of the input imagery in the composite image (pattern conservation). The fusion scheme should not introduce any artifacts or inconsistencies that would distract the human observer or subsequent processing stages. The fusion process should be shift and rotational invariant, i.e. the fusion result should not depend on the location or orientation of an object in the input imagery.
In the case of image sequence fusion, the additional problem of temporal stability and consistency of the fused image sequence arises. The human visual system is primarily sensitive to moving light stimuli, so moving artifacts or time-dependent contrast changes introduced by the fusion process are highly distracting to the human observer. Therefore, in image sequence fusion two additional requirements, temporal stability and temporal consistency, apply. Temporal stability means that the fused image sequence should be temporally stable, i.e. gray-level changes in the fused sequence must be caused only by gray-level changes in the input sequences; they must not be introduced by the fusion scheme itself.
2. EXISTING METHOD
2.1 WAVELET TRANSFORM
Wavelets are mathematical functions defined over a finite interval and having an average value of zero that transform data into different frequency components, representing each component with a resolution matched to its scale.
All wavelet functions, w(2^k t - m), are derived by dilation and translation from a single mother wavelet, w(t). This wavelet is a small wave or pulse like the one shown in Fig. 2.1.
Fig. 2.1 Mother wavelet w (t)
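As an illustration of how the family w(2^k t - m) is generated from a single mother wavelet, the following Python sketch evaluates the mother wavelet and one dilated copy. The Haar wavelet is used as a concrete, simple choice; the text does not specify a particular mother wavelet, so this is an illustrative assumption.

```python
import numpy as np

def haar(t):
    """Haar mother wavelet w(t): +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t < 0.5), 1.0,
           np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def haar_km(t, k, m):
    """A dilated and translated copy w(2^k t - m) of the mother wavelet."""
    return haar((2.0 ** k) * t - m)

t = np.linspace(0, 1, 8, endpoint=False)
print(haar(t))           # mother wavelet sampled on [0, 1)
print(haar_km(t, 1, 0))  # compressed copy supported on [0, 0.5)
```

Every member of the wavelet family is obtained from the single function w(t), which is what allows the multiresolution analysis described below.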
The decomposition process can be iterated, with successive approximations being decomposed in turn, so that one signal is broken down into many lower resolution components. This is called the wavelet decomposition tree and is depicted as in Fig. 2.2
Fig. 2.2 Multilevel decomposition
The final configuration contains a small low-resolution sub band. In addition to the various transform levels, the phrase level 0 is used to refer to the original image data. When the user requests zero levels of transform, the original image data (level 0) is treated as a low-pass band and processing follows its natural flow.
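The iterated decomposition described above can be sketched in a few lines of Python. This is a minimal illustration using the Haar averaging/differencing filters, not the implementation used in this work; at each level the approximation is decomposed again, producing the tree of Fig. 2.2.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: low-pass (average) and high-pass
    (difference) outputs, each downsampled by two."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_wavedec(x, levels):
    """Iterate the decomposition on successive approximations."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    return approx, details

x = np.arange(8, dtype=float)
a, ds = haar_wavedec(x, 3)
print(len(a), [len(d) for d in ds])  # 1 [4, 2, 1]
```

Each level halves the length of the approximation, leaving the small low-resolution sub band plus a detail band per level.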
Fig. 2.3 Image Decomposition Using Wavelets
2.1.2 WAVELET RECONSTRUCTION:
The reconstruction of the image is achieved by the inverse discrete wavelet transform (IDWT). The coefficient values are first upsampled and then passed through the reconstruction filters. This is represented as shown in Fig. 2.4.
Fig. 2.4 Wavelet Reconstruction
The wavelet analysis involves filtering and downsampling, whereas the wavelet reconstruction process consists of upsampling and filtering. Upsampling is the process of lengthening a signal component by inserting zeros between samples, as shown in Fig. 2.5.
Fig. 2.5 Reconstruction using up sampling
It is possible to reconstruct the original signal from the coefficients of the approximations and details. The process yields a reconstructed approximation which has the same length as the original signal and which is a faithful approximation of it.
The reconstructed details and approximations are true constituents of the original signal. Since the details and approximations are produced by downsampling and are only half the length of the original signal, they cannot be directly combined to reproduce it; the approximations and details must be reconstructed before being combined. The reconstructed signal is schematically represented as in Fig. 2.6.
Fig. 2.6 Reconstructed signal components
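A matching one-level reconstruction can be sketched as follows, again with the Haar filters for illustration only. Interleaving the sums and differences plays the role of the upsampling-and-filtering step: the output is twice the length of each input band and recovers the original signal exactly.

```python
import numpy as np

def haar_idwt(approx, detail):
    """Invert one Haar level: combine approximation and detail bands
    into a signal of twice their length."""
    approx = np.asarray(approx, dtype=float)
    detail = np.asarray(detail, dtype=float)
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)  # even samples
    x[1::2] = (approx - detail) / np.sqrt(2)  # odd samples
    return x

# Round trip: decompose one level, then reconstruct.
x = np.array([4.0, 2.0, 6.0, 8.0])
a = (x[0::2] + x[1::2]) / np.sqrt(2)
d = (x[0::2] - x[1::2]) / np.sqrt(2)
print(haar_idwt(a, d))  # recovers [4. 2. 6. 8.]
```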
Fig. 2.7 Fusion Rules
Fig. 2.8 Block Diagram of DWT
When constructing each wavelet coefficient for the fused image, we must determine which source image describes this coefficient better. This information is kept in the fusion decision map, which has the same size as the original image. Each value is the index of the source image that is more informative for the corresponding wavelet coefficient; thus a decision is actually made for each coefficient. There are two frequently used methods in previous research. One way to make the decision on a coefficient of the fused image is to consider only the corresponding coefficients in the source images, as illustrated by the red pixels; this is called the pixel-based fusion rule. The other way is to consider not only the corresponding coefficients but also their close neighbors, say a 3x3 or 5x5 window, as illustrated by the blue and shaded pixels; this is called the window-based fusion rule. This method exploits the fact that there is usually high correlation among neighboring pixels.
In our research, we consider that objects carry the information of interest; each pixel or small neighborhood of pixels is just one part of an object. The fusion rule is schematically represented in Fig. 2.7, and the transformation of the signal in Fig. 2.8. Thus, we propose a region-based fusion scheme: when making the decision on each coefficient, we consider not only the corresponding coefficients and their close neighborhood, but also the regions the coefficients belong to, since the regions represent the objects of interest.
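The pixel-based and window-based rules can be sketched as below. The max-magnitude selection criterion and the windowed activity measure are illustrative assumptions, since the text does not fix a particular selection criterion; the decision map records, per coefficient, which source was chosen.

```python
import numpy as np

def fuse_pixel_based(c1, c2):
    """Pixel-based rule: at each position keep the coefficient with the
    larger magnitude, and record its source in a decision map."""
    decision = (np.abs(c2) > np.abs(c1)).astype(int)  # 0 -> image 1, 1 -> image 2
    fused = np.where(decision == 0, c1, c2)
    return fused, decision

def fuse_window_based(c1, c2, win=3):
    """Window-based rule: compare local activity (sum of |coeff| over a
    win x win neighborhood) instead of single coefficients."""
    pad = win // 2
    def activity(c):
        p = np.pad(np.abs(c), pad, mode='edge')
        out = np.zeros_like(c, dtype=float)
        for i in range(c.shape[0]):
            for j in range(c.shape[1]):
                out[i, j] = p[i:i + win, j:j + win].sum()
        return out
    decision = (activity(c2) > activity(c1)).astype(int)
    return np.where(decision == 0, c1, c2), decision

c1 = np.array([[3.0, -1.0], [0.5, 2.0]])
c2 = np.array([[-2.0, 4.0], [1.0, -1.0]])
fused, dmap = fuse_pixel_based(c1, c2)
print(fused)  # [[3. 4.] [1. 2.]]
print(dmap)   # [[0 1] [1 0]]
```

A region-based scheme would replace the fixed window in `fuse_window_based` with a segmentation-derived region mask, aggregating activity per region rather than per neighborhood.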
3. PROPOSED METHOD
3.1 NEURO-FUZZY LOGIC
A neural network and fuzzy logic approach can be used for sensor fusion. Such a fusion belongs to the class of sensor fusion in which features are the input and a decision is the output. Sensor fusion can be achieved with the help of neuro-fuzzy systems. The system can be trained from the input data obtained from the sensors. The basic concept is to associate the given sensory inputs with some decision outputs. After the system has been developed, another group of input data is used to evaluate its performance.
The following algorithm and .m file for pixel-level image fusion using fuzzy logic illustrate the process of defining membership functions and rules for the image fusion process using the FIS (Fuzzy Inference System) editor of the Fuzzy Logic Toolbox in MATLAB.
3.2 ALGORITHM USING NEURO FUZZY
Read the first image into variable M1 and find its size (rows: z1, columns: s1).
Read the second image into variable M2 and find its size (rows: z2, columns: s2).
Variables M1 and M2 are images in matrix form, where each pixel value is in the range 0-255. Use a gray colormap.
Compare the rows and columns of both input images. If the two images are not of the same size, select the portions that are of the same size.
Apply wavelet decomposition and form spatial decomposition trees.
Convert the images into column form, which has C = z1*s1 entries.
Form the training data: a matrix with three columns whose entries in each column range from 0 to 255 in steps of 1.
Form the check data: a matrix of the pixels of the two input images in column format.
Decide the number and type of membership functions.
Create a fuzzy inference system of type Mamdani with the following specifications.
Fig. 3.2 Fuzzy Editor
Decide number and type of membership functions for both the input images by tuning the membership functions.
Input images in the antecedent are resolved to a degree of membership ranging from 0 to 255.
Make rules for the input images, which resolve the two antecedents to a single number from 0 to 255.
Fig. 3.3 Membership Function Editor
For num = 1 to C in steps of one, apply fuzzification using the rules developed above to the corresponding pixel values of the input images; this gives a fuzzy set represented by a membership function and results in the output image in column format.
Fig. 3.4 Rules Editor
Start training with ANFIS on the generated fuzzy inference system using the training data.
Apply fuzzification using the trained data and the check data.
Convert the column form to matrix form and display the fused image.
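The overall pixel-wise mapping idea of the steps above (two input gray levels in, one fused gray level out) can be sketched in Python. This is a simplified stand-in, not the MATLAB FIS/ANFIS pipeline described above; the triangular membership function and the soft "prefer the brighter pixel" rule are illustrative assumptions.

```python
import numpy as np

def tri_membership(x, center, width=128.0):
    """Triangular membership of a gray level x (0-255) around `center`."""
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

def fuzzy_fuse(img1, img2):
    """Pixel-wise fusion: weight each input by its 'bright' membership,
    applying the rule 'prefer the more informative pixel' softly."""
    a = img1.astype(float)
    b = img2.astype(float)
    wa = tri_membership(a, 255.0)  # degree to which image-1 pixel is bright
    wb = tri_membership(b, 255.0)
    denom = wa + wb
    fused = np.where(denom > 0,
                     (wa * a + wb * b) / np.where(denom > 0, denom, 1.0),
                     (a + b) / 2.0)  # fall back to the mean where both weights vanish
    return np.clip(fused, 0, 255).astype(np.uint8)

img1 = np.array([[200, 30], [90, 0]], dtype=np.uint8)
img2 = np.array([[100, 220], [90, 0]], dtype=np.uint8)
print(fuzzy_fuse(img1, img2))  # [[200 220] [ 90   0]]
```

In the actual system the membership functions and rules are tuned in the FIS editor and refined by ANFIS training, rather than fixed by hand as here.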
In this project, the fusion of images taken by a digital camera was studied. The pixel-level fusion mechanism was applied to sets of images. All results obtained by these methods are valid only for aligned source images of the same scene.
In order to evaluate the results and compare these methods, two quantitative assessment criteria, information entropy and root mean square error (RMSE), were employed. Experimental results indicated that there are no considerable differences in performance between the two methods. In fact, when the result of fusion at each level of decomposition is evaluated separately, both visually and quantitatively in terms of entropy, some differences appear at the lower levels, but DWT and LPT demonstrate similar results from level three of decomposition onward. However, the RMSE results, compared with the quality and entropy of the fused images, indicate that RMSE cannot be used as a proper criterion to evaluate and compare the fusion results. Finally, the experiments showed that the LPT approach runs faster than DWT: LPT takes less than half the time of DWT, and given their approximately similar performance, LPT is preferred in real-time applications.
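The two assessment criteria can be computed as follows; this is a standard formulation of information entropy (bits per pixel over the 8-bit histogram) and RMSE, shown here as a sketch.

```python
import numpy as np

def entropy(img):
    """Information entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return -np.sum(p * np.log2(p))

def rmse(ref, img):
    """Root mean square error between a reference and a fused image."""
    diff = ref.astype(float) - img.astype(float)
    return np.sqrt(np.mean(diff ** 2))

a = np.array([[0, 255], [0, 255]], dtype=np.uint8)
print(entropy(a))  # 1.0 bit: two equally likely gray levels
print(rmse(a, a))  # 0.0
```

Higher entropy indicates more information content in the fused image, while RMSE requires a reference image, which partly explains its limitations as a fusion criterion noted above.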
Fuzzy and neuro-fuzzy algorithms have been implemented to fuse a variety of images. The results of the proposed fusion process are given in terms of entropy and variance. The fusions have been implemented for medical images and remote sensing images. It is hoped that the techniques can be extended to color images and to the fusion of multiple sensor images.