Abstract— The advent of technology has allowed us to extract large amounts of data from an image. Image fusion is widely used in many fields, particularly in medical applications. It is a technology that combines the information of multiple images into a single image, and the resulting fused image is more accurate, comprehensive and reliable. In this paper, image fusion based on the wavelet and curvelet transforms is discussed. Although the image fusion algorithm based on the Wavelet Transform has been applied successfully in the image processing field, its excellent one-dimensional characteristics cannot simply be extended to two or more dimensions. We introduce a method based on the curvelet transform, which represents edges better than wavelets, and put forward an image fusion algorithm based on the Wavelet Transform and the Curvelet Transform. A region-based method is used to select both the low- and high-frequency components after the Wavelet and Curvelet Transforms. Finally, the proposed algorithm is applied to obtain the fused image. Different fusion methods are compared in terms of MSE and PSNR, and the comparison shows that the proposed method is more efficient than the other methods.
Keywords- Wavelet Transform, Curvelet Transform, Image Fusion, Image Processing, Medical Applications.
I. INTRODUCTION
Image fusion means integrating images from different sensors into a new image that meets the required specifications; the integrated image should be more suitable for human perception or subsequent computer processing. Medical image fusion has also been a popular research topic. Generally, medical image fusion refers to the matching and fusion of two or more images of the same lesion area from different medical imaging equipment. It aims to obtain complementary information and increase the net amount of information. In clinical diagnosis and treatment, fused images can provide more useful information, which is important for lesion location, diagnosis, treatment planning and pathological study.
Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are complementary in the human body information they reflect. CT can clearly show the anatomical structure of bone tissues; MRI, on the other hand, can clearly show the anatomical structure of soft tissues, organs and blood vessels. CT, MRI and other modes of medical imaging reflect human information from various angles, and the need to compare and synthesize CT and MRI images arises frequently in clinical diagnosis and treatment. Although the wavelet transform has been explored widely in various branches of image processing, it fails to represent objects containing randomly oriented edges and curves, as it is not good at representing line singularities. The curvelet transform has been developed to overcome these limitations.
This paper introduces the Fast Discrete Curvelet Transform (FDCT), uses it to fuse images, and finally compares different kinds of fusion methods. The experiments show that the method extracts useful information from the source images into the fused image, so that clear images are obtained.
II. DISCRETE WAVELET TRANSFORM
The discrete wavelet transform is one of the most preferred transforms, since it gives a time-frequency representation of signals, whereas the Fourier transform is only frequency localized. In the DWT, the original image is high-pass filtered, yielding three detail images that describe the local changes in the horizontal, vertical and diagonal directions of the original image. The image is also low-pass filtered, yielding an approximation image, which is again filtered in the same manner to generate the high- and low-frequency subbands at the next lower resolution level (Fig.1). This process, in which each filtering stage is followed by downsampling by two, is continued until the whole image is processed or a predetermined lowest level is reached, as shown in Fig.1.
Fig.1 DWT decomposition tree
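As a concrete illustration, one level of this decomposition can be sketched in a few lines of NumPy using Haar filters (a minimal sketch for illustration only; the paper does not name a wavelet family, and practical implementations typically use longer filters):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns the approximation (LL)
    and the horizontal, vertical and diagonal detail subbands."""
    img = img.astype(float)
    # Average/difference of column pairs (low-/high-pass plus downsampling).
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Repeat on row pairs to obtain the four subbands.
    ll = (a[0::2, :] + a[1::2, :]) / 2.0   # approximation
    lh = (a[0::2, :] - a[1::2, :]) / 2.0   # horizontal details
    hl = (d[0::2, :] + d[1::2, :]) / 2.0   # vertical details
    hh = (d[0::2, :] - d[1::2, :]) / 2.0   # diagonal details
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (2, 2): each subband is half the size in each dimension
```

Applying `haar_dwt2` again to `ll` produces the next lower resolution level, exactly as the decomposition tree of Fig.1 describes.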
The whole decomposition process provides us with an array of DWT coefficients obtained from each subbands at each scale. These coefficients can then be used to analyse the texture patterns of an image. Wavelet subbands obtained from the Lena image using 4 decomposition levels are shown in Fig.2.
Fig.2. A 512×512 image (left) and its DWT transform (right).
Though the wavelet transform has been widely accepted, it has several problems. In 2-D space, wavelets cannot effectively capture highly anisotropic elements such as the curves of an image, because they are not effective at representing line singularities. Besides, the discrete wavelet transform uses only three directional wavelets (horizontal, vertical and diagonal) to capture the image texture information, so images containing a high level of directionality are not well represented in the wavelet spectral domain. The curvelet transform has been developed to overcome these limitations of the wavelet transform.
III. CURVELET TRANSFORM
The curvelet transform is an advanced version of the wavelet transform. It is predominantly used to depict images at different angles and scales. Its most important mathematical property is that curved singularities can be well approximated with very few coefficients and in a non-adaptive manner - hence the name "curvelets".
The initial approach to the curvelet transform implements the concept of the discrete ridgelet transform. Since its creation in 1999, the ridgelet-based curvelet transform has been successfully used as an effective tool in image denoising, image decomposition, texture classification, image deconvolution, astronomical imaging, contrast enhancement, etc. However, the ridgelet-based curvelet transform is inefficient because it relies on the complex ridgelet transform. In 2005, Candès et al. proposed two new forms of the curvelet transform based on different handling of the Fourier samples, namely the Unequally-Spaced Fast Fourier Transform (USFFT) and the wrapping-based Fast Curvelet Transform. The wrapping-based curvelet transform is faster in computation time and more robust than the ridgelet- and USFFT-based curvelet transforms. Henceforth, the wrapping-based Fast Discrete Curvelet Transform (FDCT) is used for the fusion of images in this paper.
A. Ridgelet based Curvelet Transform
Basically, the curvelet transform extends the ridgelet transform to multiscale analysis. Therefore, we start from the definition of the ridgelet transform. Given an image f(x, y), the continuous ridgelet coefficients are expressed as:

CRT_f(a, b, θ) = ∫∫ ψ_{a,b,θ}(x, y) f(x, y) dx dy
Here, a > 0 is the scale parameter, b ∈ ℝ is the translation parameter and θ ∈ [0, 2π) is the orientation parameter. Exact reconstruction is possible from these coefficients. A ridgelet can be defined as:

ψ_{a,b,θ}(x, y) = a^(−1/2) ψ((x cos θ + y sin θ − b) / a)
where θ is the orientation of the ridgelet. Ridgelets are constant along the lines x cos θ + y sin θ = const, and transverse to these ridges they are wavelets. The contrast between wavelets and ridgelets in capturing edge information is shown in Fig. 3. It can be observed that ridgelets, at all scales, capture the edge information more accurately and tightly than wavelets.
Fig.3. Edge representation using wavelet and ridgelet.
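To make the definition concrete, the snippet below evaluates a ridgelet on a grid, using a Mexican-hat mother wavelet as ψ (an arbitrary choice made for illustration; any admissible 1-D wavelet would do):

```python
import numpy as np

def mexican_hat(t):
    """1-D Mexican-hat (Ricker) mother wavelet."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def ridgelet(x, y, a=1.0, b=0.0, theta=0.0):
    """Ridgelet psi_{a,b,theta}(x, y): a 1-D wavelet applied along
    direction theta, constant along lines x*cos(theta) + y*sin(theta) = const."""
    t = (x * np.cos(theta) + y * np.sin(theta) - b) / a
    return mexican_hat(t) / np.sqrt(a)

# Evaluate on a grid; with theta = 0 the ridgelet depends on x only,
# so every row of the result is identical (constant along the ridges).
xs, ys = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
r = ridgelet(xs, ys, a=1.0, b=0.0, theta=0.0)
print(np.allclose(r, r[0:1, :]))  # True
```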
In the ridgelet-based curvelet approach, the input image is first decomposed into a set of subbands, each of which is then partitioned into several blocks for ridgelet analysis. The ridgelet transform involves a spatial partitioning step in which the windows overlap to avoid blocking effects, which results in a large amount of redundancy. Moreover, this process is very time consuming.
B. Discrete Curvelet Transform
The fast discrete curvelet transform based on the wrapping of Fourier samples has lower computational complexity, as it uses the fast Fourier transform instead of the complex ridgelet transform. In this approach, a tight frame is introduced as the curvelet support to reduce data redundancy in the frequency domain. Ridgelets have a fixed length equal to the image size and a variable width, whereas curvelets have both variable width and length and hence represent more anisotropy. Therefore, the wrapping-based curvelet transform is simpler, less redundant and faster to compute than the ridgelet-based curvelet transform. We now discuss the discrete curvelet transform based on the wrapping of Fourier samples. It takes a 2-D image as input in the form of a Cartesian array f[m, n], with 0 ≤ m < M and 0 ≤ n < N, and generates a number of curvelet coefficients indexed by a scale j, an orientation l and two spatial location parameters (k1, k2) as output. The discrete curvelet coefficients can be defined by:

c(j, l, k1, k2) = Σ_{0≤m<M, 0≤n<N} f[m, n] φ̄_{j,l,k1,k2}[m, n]

where the bar denotes complex conjugation.
Here, each φ_{j,l,k1,k2} is a digital curvelet waveform. This curvelet approach implements the effective parabolic scaling law on the subbands in the frequency domain to capture curved edges within an image more effectively. If we combine the frequency responses of the curvelets at different scales and orientations, we get a rectangular frequency tiling that covers the whole image in the spectral domain (Fig.4).
Fig.4. Rectangular frequency tiling of an image with 5 level curvelets.
To achieve a higher level of efficiency, the curvelet transform is usually implemented in the frequency domain: both the curvelet and the image are transformed and then multiplied in the Fourier domain, and the product is inverse Fourier transformed to obtain the curvelet coefficients. The process can be described as Curvelet transform = IFFT [FFT (Curvelet) × FFT (Image)], and the product of the multiplication is a wedge.
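The mechanics of this frequency-domain step can be sketched with NumPy's FFT routines. The annular band-pass mask below is only a stand-in for a real curvelet window, which would additionally be confined to a directional wedge:

```python
import numpy as np

# Sketch of: coefficients = IFFT[ FFT(window) * FFT(image) ].
n = 64
img = np.random.default_rng(0).random((n, n))

# Build a radial frequency mask selecting one "scale" band.
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
radius = np.sqrt(fx**2 + fy**2)
mask = (radius >= 0.1) & (radius < 0.25)

# Multiply in the Fourier domain, then invert to get the coefficients.
coeffs = np.fft.ifft2(np.fft.fft2(img) * mask)
print(coeffs.shape)  # (64, 64)
```

In the actual FDCT the support of each window is a wedge rather than a full annulus, which is what makes the wrapping step of the next paragraph necessary.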
The trapezoidal wedge in the spectral domain is not suitable for use with the inverse Fourier transform, which is the next step in collecting the curvelet coefficients using the IFFT, because the wedge data cannot be accommodated directly in a rectangle of size 2^j × 2^(j/2). To overcome this problem, Candès et al. formulated a wedge-wrapping procedure in which a parallelogram with sides 2^j and 2^(j/2) is chosen as a support for the wedge data. The wrapping is done by periodically tiling the spectrum inside the wedge and then collecting the rectangular coefficient area at the origin. Through this periodic tiling, the rectangular region collects the wedge's corresponding fragmented portions from the surrounding parallelograms (Fig. 5).
Fig.5. Wrapping a wedge around the origin by periodic tiling of the wedge data. The angle θ is in the range (π/4, 3π/4).
Because of this wedge-wrapping process, this approach is known as the 'wrapping-based curvelet transform'. The wrapping is illustrated in Fig.6 and explained as follows. As shown in Fig.6, in order to apply the IFFT to the Fourier-transformed wedge, the wedge has to be arranged as a rectangle. The idea is to replicate the wedge on a 2-D grid, so that a rectangle in the center captures all the components a, b and c of the wedge.
Fig.6.Fast discrete curvelet transform to generate curvelet coefficients.
Wedge wrapping is done for all the wedges at each scale in the frequency domain, so we obtain a set of subbands or wedges at each curvelet decomposition level. These subbands are the collection of discrete curvelet coefficients.
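The wrapping-by-periodic-tiling idea can be illustrated with a toy routine that folds a larger support into a rectangle by index arithmetic modulo the rectangle size (a simplification: the real FDCT wraps a tilted parallelogram of Fourier samples, not an axis-aligned block):

```python
import numpy as np

def wrap_to_rectangle(wedge, rect_shape):
    """Wrap data from a larger support into a rectangle by periodic
    tiling: entry (p, q) of the rectangle accumulates every sample of
    the wedge whose indices are congruent to (p, q) modulo the
    rectangle size. A toy version of the wedge-wrapping idea."""
    out = np.zeros(rect_shape, dtype=wedge.dtype)
    P, Q = rect_shape
    for (i, j), v in np.ndenumerate(wedge):
        out[i % P, j % Q] += v
    return out

wedge = np.ones((8, 4))            # stand-in for a wedge's bounding data
rect = wrap_to_rectangle(wedge, (4, 4))
print(rect[0, 0])                  # 2.0: each entry collects two samples
```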
IV. IMAGE FUSION ALGORITHM BASED ON WAVELET AND CURVELET TRANSFORM
Images can be fused at three levels, namely pixel-level fusion, feature-level fusion and decision-level fusion. Pixel-level fusion is adopted in this paper: the operations act on pixels directly to obtain the fused image, keeping as much information as possible from the source images. First, the images are pre-processed and cut to the same scale according to the selected region. Then the images are divided into sub-images at different scales by the Wavelet Transform, after which the Curvelet Transform of every sub-image is taken. The steps for using the Curvelet Transform to fuse two images are as follows:
• Resample and register the original images, correcting distortion so that both have a similar probability distribution; the wavelet coefficients of similar components then stay in the same magnitude.
• Use the Wavelet Transform to decompose the original images into the proper number of levels. One low-frequency approximate component and three high-frequency detail components are acquired at each level.
• Take the Curvelet Transform of each acquired low-frequency approximate component and high-frequency detail component of both images.
• Use the region-based method to select both the low- and high-frequency components for fusion.
• Take the inverse Curvelet Transform and inverse Wavelet Transform; the resulting image is the fused image.
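A much-simplified sketch of this pipeline is given below. It uses a single Haar DWT level and omits the curvelet stage and the region-based selection, applying instead the averaging and maximum-absolute-value rules used in the experiments; it illustrates the fusion mechanics, not the full proposed method:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT -> (LL, LH, HL, HH) subbands."""
    img = img.astype(float)
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    return ((a[0::2] + a[1::2]) / 2.0, (a[0::2] - a[1::2]) / 2.0,
            (d[0::2] + d[1::2]) / 2.0, (d[0::2] - d[1::2]) / 2.0)

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    a = np.zeros((2 * h, w)); d = np.zeros((2 * h, w))
    a[0::2], a[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    out = np.zeros((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = a + d, a - d
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the low-frequency subband,
    keep the max-absolute coefficient in each high-frequency subband."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    fused = [(c1[0] + c2[0]) / 2.0]
    for h1, h2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(h1) >= np.abs(h2), h1, h2))
    return haar_idwt2(*fused)

rng = np.random.default_rng(1)
a, b = rng.random((8, 8)), rng.random((8, 8))
f = fuse(a, b)
print(f.shape)  # (8, 8)
```

A useful sanity check on such a pipeline is that fusing an image with itself must return the image unchanged, which the exact Haar inverse guarantees here.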
V. EXPERIMENTAL RESULTS AND ANALYSIS
A. Multi-Focus Image Fusion
We use standard multi-focus lab test images. Fig.7(a) shows the left-focus image, in which the outline of the circular clock looks clear; Fig.7(b) shows the right-focus image, in which the outline of the rectangular clock looks clear. Three fusion algorithms are compared in this paper: the Discrete Wavelet Transform (DWT), the Discrete Fast Curvelet Transform (DFCT), and the combined wavelet and curvelet transform proposed in this paper. For both DWT and DFCT, different fusion rules are used for different subbands: the average operator is the fusion rule for the low-frequency subband, and choosing the coefficient with the largest absolute value is the fusion rule for the three high-frequency subbands. Fig.7(c), (d) and (e) show the corresponding fusion results.
Fig.7. Multi-focus lab images and their image fusion. (a) left-focus; (b) right-focus; (c) fused image of DWT; (d) fused image of DFCT; (e) fused image of proposed method.
We adopt the entropy of the fused image, the Peak Signal to Noise Ratio (PSNR) and the Mean Square Error (MSE) to evaluate fusion quality; the results are shown in Table I. Within the same group of experiments, a larger entropy, a higher PSNR in decibels or a smaller MSE indicates a better fusion method.
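These three metrics can be computed directly with NumPy (a sketch assuming images scaled to [0, 1]; the `peak` value must match the image range):

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a fused image."""
    ref, img = ref.astype(float), img.astype(float)
    return np.mean((ref - img) ** 2)

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in decibels (higher is better)."""
    return 10.0 * np.log10(peak**2 / mse(ref, img))

def entropy(img, bins=256):
    """Shannon entropy of the image histogram in bits
    (a larger value means the image carries more information)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

ref = np.linspace(0.0, 1.0, 64).reshape(8, 8)
noisy = ref + 0.01
print(round(mse(ref, noisy), 4))   # 0.0001
print(round(psnr(ref, noisy), 1))  # 40.0
```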
TABLE I. EVALUATION OF THE MULTI-FOCUS IMAGE FUSION RESULTS
Fusion Method        MSE        PSNR     Entropy
DFCT                 0.0012011  51.6547  10.8800
DWT                  0.0015251  78.0588  10.5756
Proposed Algorithm   0.0010167  81.9617  11.7507
B. Complementary Fusion Image
In medicine, CT and MRI images are both tomographic scanning images, but they have different features.
Fig.8. Medical images and their image fusion. (a) CT Image; (b) MRI Image; (c) fused image of DWT; (d) fused image of DFCT; (e) fused image of proposed method.
Fig.8(a) shows the CT image, in which brightness is related to tissue density: the brightness of bones is higher, and some soft tissue cannot be seen. Fig.8(b) shows the MRI image, in which brightness is related to the amount of hydrogen atoms in the tissue: the brightness of soft tissue is higher, and bones cannot be seen. These images therefore contain complementary information. We apply the three aforementioned fusion methods to the medical images with the same fusion rules; Fig.8(c), (d) and (e) show the respective results, and the data are given in Table II.
TABLE II. EVALUATION OF THE COMPLEMENTARY IMAGE FUSION RESULTS
Fusion Method        MSE       PSNR     Entropy
DFCT                 0.033372  24.3671  10.8800
DWT                  0.026705  65.6258   7.3968
Proposed Algorithm   0.017803  68.9071   8.2186
We carried out simulation experiments with the above fusion methods for comparison; the results are shown in Fig.8. All three algorithms achieve good fusion results, and the results of the method proposed in this paper contain more detail information. The data in Table II support the same conclusion.
VI. CONCLUSION
This paper puts forward an image fusion algorithm based on the Wavelet Transform and the wrapping-based Curvelet Transform. This method better describes the edge directions of images and analyzes image features in a better manner. The fusion methods were applied in simulation experiments on multi-focus and complementary fusion images, and the proposed method was compared with different fusion methods in terms of MSE and PSNR. The results show that the proposed method achieves better fusion results.
ACKNOWLEDGMENT
We would like to heartily thank everyone who generously helped us in the preparation of this paper. Our special thanks to our supervisor, Mrs. S Dhanalakshmi, who played a major role in shaping this paper and gave us the most valuable guidance and suggestions. Without her patient instructions and constant moral support this paper would not have materialised. We also convey our heartfelt thanks to our H.O.D., Mrs. S Saraswathi Janki, for being the guiding force. Finally, we express our sincere and immense gratitude to our beloved parents for putting up with us without a word of complaint.