An M-band wavelet transform consists of one scaling filter and M-1 wavelet filters, and can be considered a generalization of the two-band DWT. M-band wavelets provide a more flexible tiling of the time-scale plane and zoom into narrow-band high-frequency components in their frequency responses. The double-density discrete wavelet transform (DDWT), an example of an M-band DWT, provides a compromise between the UDWT and the critically sampled DWT; it consists of one scaling filter and two wavelet filters. In this chapter, the performance of the DDWT and its complex-valued extension, the double-density dual-tree complex wavelet transform (DDT-CWT), is studied by fusing two artificially blurred images using different combinations of fusion rules in three methods. The effect of window size on feature selection is also analyzed.
4.1 Double density Discrete Wavelet Transform
The sampling of the frequency and scaling plane provided by the critically sampled DWT is illustrated by the (idealized) diagram in the first panel of figure 4.1. The distance between adjacent points increases by a factor of two when moving from one scale to the next coarser scale. The corresponding diagram for the UDWT is shown in the second panel of figure 4.1. In this case the distance between points is constant regardless of scale. On the other hand, the diagram corresponding to the DDWT is shown in the third panel of the figure. For the DDWT, each scale is represented by twice as many points as in the critically sampled DWT and the octave spacing between points characteristic of the DWT is preserved. Figure 4.1 also indicates that both the DDWT and UDWT approximate the continuous wavelet transform more closely than the critically sampled DWT does. The number of points in the diagrams indicates the redundancy incurred by each of the transforms. UDWT is the most redundant, with a redundancy factor that depends on the number of scales over which the transform is computed. On the other hand, DDWT is redundant by a factor of two regardless of the number of scales used. An attractive feature of UDWT is that it is exactly shift invariant. Although that is not possible for DDWT, it turns out that it can be nearly shift invariant. Having a closer spacing between adjacent wavelets within the same scale makes DDWT less shift-sensitive than the critically sampled DWT while keeping the redundancy much lower than that of UDWT.
Figure 4.1: Frequency and Scaling plane of DWT, UDWT and DDWT
4.1.1. Filter Bank Structure of DDWT
Selecting an appropriate filter bank structure is the first step in the implementation of the DDWT. The filter bank shown in figure 4.2 exactly matches the strategy for sampling the frequency and scaling plane illustrated in the third panel of figure 4.1. It is similar to the usual two-channel filter bank used in the implementation of the critically sampled DWT, except that the down-sampler and up-sampler in the high-pass channel have been deleted. This is called an oversampled filter bank because the total rate of the sub band signals c(n) and d(n) exceeds the input rate by a factor of 3/2.
Figure 4.2: Filter Bank structure of DDWT for which PR is impossible
DDWT is then implemented by recursively applying this filter bank on the low-pass sub band signal c(n). The prominent issue is the design of the filters h0(n) and h1(n) so that y(n) = x(n) (the condition for perfect reconstruction). Unfortunately, for the filter bank shown in figure 4.2, there are no finite length filters hi(n) satisfying this required property. Even if infinite length hi(n) realizable with finite order difference equations are allowed, there are still no solutions. So, to construct DDWT with FIR filters, the over sampled filter bank shown in figure 4.3 can be used.
Figure 4.3: Filter Bank structure of DDWT for which PR is possible
The filter h0(n) will be a low-pass scaling filter whereas the filters h1(n) and h2(n) will both be high-pass wavelet filters. To develop the perfect reconstruction conditions,
For perfect reconstruction, y(n) = x(n), it is necessary that

H0(z) H0(z^-1) + H1(z) H1(z^-1) + H2(z) H2(z^-1) = 2
H0(z) H0(-z^-1) + H1(z) H1(-z^-1) + H2(z) H2(-z^-1) = 0
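As a rough numerical sanity check, the two product-filter conditions for a three-channel bank with downsampling by 2 in every channel, Σ Hi(z)Hi(1/z) = 2 and Σ Hi(z)Hi(-1/z) = 0, can be verified directly on the coefficient sequences. This is a minimal sketch assuming equal-length filters; the trivial Haar pair with h2(n) = 0 is used only as a degenerate placeholder, since genuine double-density filters must come from a separate design procedure.

```python
import numpy as np

def pr_residuals(h0, h1, h2):
    """Check the perfect-reconstruction conditions of a three-channel
    critically downsampled analysis bank (equal-length filters assumed)."""
    def corr(h):
        # coefficient sequence of H(z) H(1/z), up to a fixed common delay
        return np.convolve(h, h[::-1])
    def alias(h):
        # coefficient sequence of H(z) H(-1/z), up to the same delay
        return np.convolve(h, h[::-1] * (-1.0) ** np.arange(len(h)))
    p = corr(h0) + corr(h1) + corr(h2)      # should be 2*delta (centered)
    q = alias(h0) + alias(h1) + alias(h2)   # should be identically zero
    return p, q

# placeholder filter set: orthonormal Haar pair plus a zero third filter
h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)
h2 = np.zeros(2)
p, q = pr_residuals(h0, h1, h2)   # p is 2*delta, q is all zeros
```

Any candidate filter triple can be screened this way before building the iterated filter bank.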
The three-channel filter bank used to implement the double-density DWT corresponds to a wavelet frame based on a single scaling function Φ(t) and two distinct wavelet functions Ψ1(t) and Ψ2(t). Following the theory of dyadic wavelet bases, the scaling space Vj and the wavelet spaces Wi,j are defined as

Vj = span{ 2^(j/2) Φ(2^j t - k) : k ∈ Z },   Wi,j = span{ 2^(j/2) Ψi(2^j t - k) : k ∈ Z },   i = 1, 2
It follows that Φ(t), Ψ1(t) and Ψ2(t) must satisfy the dilation and wavelet equations

Φ(t) = √2 Σn h0(n) Φ(2t - n),   Ψi(t) = √2 Σn hi(n) Φ(2t - n),   i = 1, 2
The scaling function Φ(t) and the two wavelet functions Ψ1(t) and Ψ2(t) are defined through the above equations by the low-pass scaling filter h0(n) and the high-pass wavelet filters h1(n) and h2(n). Any square-integrable signal f(t) can then be expanded as

f(t) = Σk c(k) Φ(t - k) + Σ(i=1,2) Σ(j≥0) Σk di(j,k) 2^(j/2) Ψi(2^j t - k)
The two wavelets Ψ1(t) and Ψ2(t) are designed to be offset from one another by one half, so that the integer translates of one wavelet fall midway between the integer translates of the other:

Ψ1(t) = Ψ2(t - 1/2)
By having more wavelets, which gives a closer spacing between adjacent wavelets within the same scale, the DDWT more closely approximates the continuous wavelet transform (CWT).
4.1.2 2-D Extension of DDWT
To use the double-density discrete wavelet transform for 2-D signal processing, it is necessary to implement a two-dimensional DDWT. This can simply be done by applying the transform first to the rows and then to the columns of an image, as is usually done for the 2-D DWT. This is shown in figure 4.4, in which the 1-D oversampled filter bank is iterated on the rows first and then on the columns. This gives rise to nine 2-D sub bands: one is the output of the 2-D low-pass scaling filter, and the other eight correspond to the eight 2-D wavelet filters.
Figure 4.4: 2D Double Density Discrete Wavelet Transform
To indicate the filters used along the row and column directions to create the nine sub bands, the sub bands are labeled LL, LH1, LH2, H1L, H1H1, H1H2, H2L, H2H1 and H2H2. Since L is a low-pass filter and H1 and H2 are high-pass filters, the H1H1, H1H2, H2H1 and H2H2 sub bands have frequency-domain support comparable to the HH sub band of the critically sampled DWT.
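One level of this separable nine-sub-band decomposition can be sketched as follows. This is a minimal sketch only: circular (periodic) boundary handling is assumed, and the trivial Haar pair with a zero third filter again stands in for actual double-density filters.

```python
import numpy as np

def circ_filter_down(x, h):
    """Circular convolution along the last axis, then downsampling by 2."""
    N = x.shape[-1]
    H = np.fft.fft(h, N)
    y = np.real(np.fft.ifft(np.fft.fft(x, axis=-1) * H, axis=-1))
    return y[..., ::2]

def ddwt2_level(img, h0, h1, h2):
    """One level of the separable 2-D three-channel analysis bank:
    rows first, then columns, giving nine sub bands."""
    filters = {"L": h0, "H1": h1, "H2": h2}
    subbands = {}
    for rname, rh in filters.items():
        rows = circ_filter_down(img, rh)                      # filter the rows
        for cname, ch in filters.items():
            subbands[rname + cname] = circ_filter_down(rows.T, ch).T
    return subbands   # keys: LL, LH1, LH2, H1L, H1H1, H1H2, H2L, H2H1, H2H2
```

With orthonormal placeholder filters the total energy of the nine sub bands equals the energy of the input image, which is a convenient check on the implementation.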
While the 1-D DDWT is redundant by a factor of 2, the corresponding 2-D extension is redundant by a factor of 8/3. In general, the redundancy rate is (3^d - 1)/(2^d - 1) for the extension to d-dimensional signals. As d increases, this redundancy rate approaches (3/2)^d, the oversampling rate of the filter bank building block. This redundancy is higher than that of the 2-D Laplacian pyramid, but lower than that of the 2-D extension of the dual-tree DWT, which has a redundancy rate of 4. In general, the d-dimensional dual-tree DWT has a redundancy of 2^d.
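The redundancy figures quoted above follow directly from the two formulas and can be reproduced in a few lines:

```python
from fractions import Fraction

def ddwt_redundancy(d):
    """Redundancy of the d-dimensional double-density DWT: (3^d - 1)/(2^d - 1)."""
    return Fraction(3**d - 1, 2**d - 1)

def dual_tree_redundancy(d):
    """Redundancy of the d-dimensional dual-tree DWT: 2^d."""
    return 2**d

# d = 1 gives 2, d = 2 gives 8/3 for the DDWT; d = 2 gives 4 for the dual tree
```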
4.2 Double Density Dual Tree DWT
A double-density Dual Tree DWT is an over-complete DWT designed to simultaneously possess the good properties of the DDWT and the Dual Tree DWT. A three-channel filter bank structure is used to implement the DDWT, whereas the DTCWT is based on the concatenation of two critically sampled DWTs; the filter bank structure corresponding to the Dual Tree DWT simply consists of two critically sampled iterated filter banks operating in parallel. The differences between the DDWT and the Dual Tree DWT are as follows:
In Dual Tree DWT, the two wavelets form an approximate Hilbert transform pair, whereas in DDWT, the two wavelets are offset by one half.
Since achieving the Hilbert pair property imposes additional constraints, there are fewer degrees of freedom in the design of the Dual Tree DWT, whereas there are more degrees of freedom in the design of the DDWT.
Two-channel filter bank structures are used to implement the Dual Tree DWT, whereas three-channel filter bank structures are used to implement the DDWT.
The Dual Tree DWT can be interpreted as a complex valued wavelet transform, whereas DDWT cannot be interpreted as a complex valued wavelet transform.
Figure 4.5: Double Density Dual Tree Discrete Wavelet Transform
The double-density Dual Tree DWT, which is designed to simultaneously possess the properties of the double-density DWT and the Dual Tree DWT, is based on two distinct scaling functions Φh(t), Φg(t) and four distinct wavelets Ψh,i(t), Ψg,i(t), for i = 1, 2. Within each tree, the two wavelets are offset from one another by one half,

Ψh,1(t) = Ψh,2(t - 1/2),   Ψg,1(t) = Ψg,2(t - 1/2)
where the two wavelets Ψg,1(t) and Ψg,2(t) form an approximate Hilbert transform pair with Ψh,1(t) and Ψh,2(t), that is,

Ψg,i(t) ≈ H{Ψh,i(t)},   i = 1, 2
The filter bank structure of the double-density Dual Tree DWT is shown in figure 4.5. The filters in the first filter bank are denoted by hi(n) and those in the second filter bank by gi(n), for i = 0, 1, 2. The synthesis filters in each filter bank are the time-reversed versions of the analysis filters, and the filters hi(n) and gi(n) must satisfy the perfect reconstruction property. From the multirate identities, the condition for perfect reconstruction of each tree is

Σ(i=0,1,2) Hi(z) Hi(z^-1) = 2,   Σ(i=0,1,2) Hi(z) Hi(-z^-1) = 0

and similarly for Gi(z).
The scaling and wavelet functions are defined by the dilation and wavelet equations

Φh(t) = √2 Σn h0(n) Φh(2t - n),   Ψh,i(t) = √2 Σn hi(n) Φh(2t - n),   i = 1, 2
Φg(t) = √2 Σn g0(n) Φg(2t - n),   Ψg,i(t) = √2 Σn gi(n) Φg(2t - n),   i = 1, 2
The Fourier transforms of the scaling functions and wavelet functions are denoted Φh(ω), Φg(ω), Ψh,i(ω) and Ψg,i(ω).
If Ψg,i(t) and Ψh,i(t) form Hilbert transform pairs, then in the frequency domain

Ψg,i(ω) = -j sign(ω) Ψh,i(ω)
From the infinite product formula, the equation relating Φg(ω) and Φh(ω) can be derived as,
Similarly, the equation relating Ψg,i(ω) and Ψh,i(ω) can be derived as,
for i=1,2. For Hilbert transform pairs, θi(ω) must satisfy,
Then the periodic functions θi(ω) for the double-density Dual Tree DWT are defined as,
4.2.1. 2-D Extension of double density dual tree DWT
To use the double-density dual-tree discrete wavelet transform for 2-D image processing, it is necessary to implement a two-dimensional version. This can simply be done by applying the transform first to the rows and then to the columns of an image, as is usually done for the 2-D DTCWT. This is shown in figure 4.6, in which the 1-D oversampled filter bank of the double-density dual-tree DWT is iterated on the rows first and then on the columns. This gives rise to eighteen 2-D sub bands, two of which are low-pass scaling sub bands while the other sixteen are wavelet sub bands. There are two versions of the double-density dual-tree DWT: the double-density dual-tree real-oriented DWT and the double-density dual-tree complex-oriented DWT.
Figure 4.6: Analysis Filter Bank of Double Density Dual Tree real-oriented Wavelet Transform
The 2-D double-density dual-tree real-oriented DWT of an image is implemented by using two over sampled 2-D DDWTs in parallel. The sixteen wavelets associated with the real 2-D double-density dual-tree DWT are illustrated in the following figure.
Figure 4.7: Double Density Dual Tree real-oriented Wavelets
The 2-D double-density dual-tree complex-oriented DWT (DDT-CWT) is four-times expansive; it gives rise to twice as many wavelets, in the same dominant orientations, as the 2-D double-density dual-tree real-oriented DWT. For each direction, one of the wavelets can be interpreted as the real part of a complex-valued 2-D wavelet function, while the other can be interpreted as the imaginary part. The transform is implemented by applying four 2-D double-density DWTs in parallel to the same input data, with distinct filter sets for the rows and columns. As in the DTCWT, the sums and differences of the sub band images are taken; this operation yields the 32 oriented wavelets associated with the 2-D double-density dual-tree complex DWT. The wavelets are oriented in the same directions as those of the real-oriented version, but with two per direction. The thirty-two wavelets associated with the complex 2-D double-density dual-tree DWT are illustrated in figure 4.8: the first row of wavelets can be interpreted as the real part of a set of sixteen complex wavelets and the second row as the imaginary part, while the third row represents the magnitude of the sixteen complex wavelets.
Figure 4.8: Double Density Dual Tree Complex Wavelets
4.3 DDWT and DDT-CWT based Image Fusion
Figure 4.9: Schematic diagram of DDWT/DDT-CWT based Image Fusion
Figure 4.9 shows the structure of the proposed DDWT- and DDT-CWT-based fusion of two artificially blurred source images using different combinations of fusion rules. First, the DDWT/DDT-CWT of the two source images is computed. In each sub band, the coefficients of the two images are compared using a fusion rule that serves as a measure of activity at that particular scale and location. A fused wavelet representation is created by taking, at each position, the coefficient that shows the greater activity. The inverse wavelet transform of this composite then yields a fused image with clear focus over the whole image.
Method 1: This method uses the simple absolute-maximum fusion rule for all sub bands. Since larger absolute transform coefficients correspond to sharper brightness changes, and thus to the salient features of the images, a good integration rule is to select the coefficients whose absolute values are higher at every point in the transform domain. In this way, fusion takes place at all resolution levels and the more dominant features at each scale are preserved in the new multiresolution representation. However, since the meaningful features in an image are usually larger than one pixel, a pixel-by-pixel maximum selection rule may not be the most appropriate method, so an area-based selection rule is used instead. The images are first decomposed into their multiresolution representations. The maximum absolute value of the coefficients over a 3 x 3 or 5 x 5 window of each image patch is computed as an activity measure associated with the pixel centered in the window. The coefficient from the source image whose activity measure is larger is chosen to form the fused wavelet coefficient at the corresponding location. In this way, a high activity value indicates the presence of a dominant feature in the local area. A binary decision map of the same size as the wavelet sub band is then created to record the selection results. This binary map is subject to consistency verification: in the transform domain, if the center pixel value comes from image A while the majority of the surrounding pixel values come from image B, the center pixel value is switched to that of image B. In the implementation, a majority filter is applied to the binary decision map. A fused image is finally obtained based on the new binary decision map. This selection scheme helps to ensure that most of the dominant features are incorporated into the fused image.
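The per-sub-band rule of Method 1 can be sketched as follows. This is a sketch only: `scipy.ndimage` filters are used for the windowed maximum and the majority vote, a 3 x 3 window is assumed, and the function names are illustrative rather than part of the original implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def fuse_subband_absmax(a, b, win=3):
    """Fuse one sub band pair by windowed absolute-maximum activity,
    followed by majority-filter consistency verification."""
    act_a = maximum_filter(np.abs(a), size=win)   # activity measure of A
    act_b = maximum_filter(np.abs(b), size=win)   # activity measure of B
    decision = act_a >= act_b                     # True -> coefficient from A
    # consistency verification: flip isolated decisions to the local majority
    decision = uniform_filter(decision.astype(float), size=3) > 0.5
    return np.where(decision, a, b)
```

Applying this function to every sub band pair and inverting the transform gives the fused image.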
Method 2: This method uses the absolute-maximum fusion rule for the low-frequency sub bands and a gradient-based energy fusion rule for the high-frequency sub bands. The objective of any image fusion algorithm is to identify, compare and then transfer the most important visual information from the input source images into an output fused image. The visual information an observer extracts from a stimulus is conveyed by changes in the observed image rather than by absolute pixel values. The most important changes possess a certain degree of spatial structure and are perceived by observers as gradients and edges. In the same manner, larger and more meaningful image structures such as patterns, features and objects can be considered specific spatial arrangements of basic gradient elements of different scales and orientations. So, the objective of any image fusion algorithm is to transfer the most important gradient information from the registered source images into the fused image without loss. Moreover, after taking the DDWT or DDT-CWT, the transformed image contains low-frequency as well as high-frequency sub bands. The low-frequency sub bands contain the average information, while the high-frequency sub bands contain directional and edge information, so a good fusion algorithm should treat the low-frequency and high-frequency sub bands separately. Since the low-frequency approximation sub bands contain the average image information, and larger absolute transform coefficients in these sub bands correspond to sharper brightness changes, a good integration rule is to use the maximum absolute value of the coefficients over a 3 x 3 or 5 x 5 window of each image patch as an activity measure. The coefficient from the source image whose activity measure is larger is chosen to form the fused wavelet coefficient of the low-frequency sub band at the corresponding location.
To transfer the directional and edge information present in the high-frequency sub bands to the fused image, an edge-based information measure can be used as the selection criterion. So, the gradient-based energy rule is applied to select the coefficients that form the high-frequency fused wavelet coefficients.
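One plausible reading of this gradient-based energy rule for a single high-frequency sub band pair is sketched below; the finite-difference gradient operator and the 3 x 3 window are assumptions, not prescribed by the text.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_subband_gradient(a, b, win=3):
    """Fuse a high-frequency sub band pair by windowed gradient energy."""
    def grad_energy(x):
        gy, gx = np.gradient(x)                        # finite-difference gradients
        return uniform_filter(gx * gx + gy * gy, size=win)
    # keep, at each position, the coefficient with the larger local gradient energy
    return np.where(grad_energy(a) >= grad_energy(b), a, b)
```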
Method 3: This method uses the absolute-maximum fusion rule for the low-frequency sub bands, whereas a salience/match-measure-based fusion rule is applied to the high-frequency sub bands. Since the low-frequency approximation sub bands contain the average image information, larger absolute transform coefficients in these sub bands correspond to sharper brightness changes. So the maximum absolute value of the coefficients over a 3 x 3 or 5 x 5 window of each image patch is used as an activity measure, and the coefficient from the source image whose activity measure is larger is chosen to form the fused wavelet coefficient of the low-frequency sub band at the corresponding location. For each high-frequency sub band pair, the salience of a feature is computed as the local energy in the neighborhood of a coefficient D_X(p),

E_X(p) = Σq w(q) D_X(p + q)²,   X = A, B
where w(q) is a weight with Σq w(q) = 1. At a given resolution level j, this fusion scheme uses two distinct modes of combination, namely selection and averaging. To determine which mode is used, the match measure M(p) between the local energies E_A(p) and E_B(p) of the two sub bands is calculated as

M(p) = 2 Σq w(q) D_A(p + q) D_B(p + q) / (E_A(p) + E_B(p))
If M(p) is smaller than a threshold T, the coefficient with the larger local energy is placed in the composite transform while the coefficient with the smaller local energy is discarded. The selection mode is implemented as

D_F(p) = D_A(p) if E_A(p) ≥ E_B(p), and D_F(p) = D_B(p) otherwise.
Otherwise, if M(p) ≥ T, the averaging mode is used and the combined transform coefficient is computed as

D_F(p) = w_max(p) D_max(p) + w_min(p) D_min(p)
where w_min(p) = 1/2 - (1/2)(1 - M(p))/(1 - T), w_max(p) = 1 - w_min(p), and D_max(p), D_min(p) denote the coefficients with the larger and smaller local energy, respectively. A binary decision map of the same size as the wavelet sub band is then created to record the selection results. This binary map is subject to consistency verification: in the transform domain, if the center pixel value comes from image A while the majority of the surrounding pixel values come from image B, the center pixel value is switched to that of image B. In the implementation, a majority filter is applied to the binary decision map. A fused image is finally obtained based on the new binary decision map. This selection scheme helps to ensure that most of the dominant features are incorporated into the fused image.
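The salience/match scheme of Method 3 can be sketched for one high-frequency sub band pair as follows. This is a sketch under stated assumptions: uniform weights w(q) over the window, a 3 x 3 window, and a threshold T = 0.75; the original chapter does not fix these values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_salience_match(a, b, win=3, T=0.75, eps=1e-12):
    """Salience/match fusion of one sub band pair: selection where the
    sub bands disagree (M < T), weighted averaging where they agree."""
    Ea = uniform_filter(a * a, size=win)     # local energy (salience) of A
    Eb = uniform_filter(b * b, size=win)     # local energy (salience) of B
    # normalized match measure between the two sub bands
    M = 2.0 * uniform_filter(a * b, size=win) / (Ea + Eb + eps)
    wmin = 0.5 - 0.5 * (1.0 - M) / (1.0 - T)   # averaging weights
    wmax = 1.0 - wmin
    dmax = np.where(Ea >= Eb, a, b)            # locally more salient coefficient
    dmin = np.where(Ea >= Eb, b, a)
    return np.where(M < T, dmax, wmax * dmax + wmin * dmin)
```

When the two sub bands are identical the match measure is one everywhere, the averaging branch is taken, and the input is returned unchanged, as expected.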