Image Enhancement By Using Wavelet Computer Science Essay


        The present paper proposes a new technique for enhancing underwater images. The technique uses the discrete wavelet transform. The first step converts the colored image from the RGB color space into the HSV color space. The second step applies the discrete wavelet transform to the hue layer of the HSV image. The wavelet transform divides the image data into four areas: one holds the most important (approximation) data, and the other three hold detail information. The area of important data is used in the enhancement operation: it is stretched to substitute for the loss of colors, and the four areas are then merged again to form the image in the wavelet domain. The image in the spatial domain is recovered using the inverse discrete wavelet transform. The saturation and value layers of HSV are also stretched to substitute for light absorption.

Keywords: Underwater, Image Enhancement, HSV, Wavelet, Stretching.


1. Introduction

Image processing is one of the fastest growing fields of computer science. The concern of most researchers in this area has been to improve the quality of images or to compute information that recovers details lost under certain conditions of observation. Some applications need an enhancement process to produce a clear image. Image enhancement is one of the most important tasks in the image processing field: it improves the quality of images for human vision.

Removing blur and noise, increasing contrast, and revealing details are examples of enhancement operations. In other words, enhancement substitutes for the loss of image information, which is caused by incidental circumstances during image acquisition. The main aim of image enhancement is to process an image so as to produce a new image better than the original. The difficulty of image enhancement depends on the environment of the image, and one of the most difficult environments is underwater [1, 2, 3]. Underwater images suffer from limited range, non-uniform lighting, low contrast and diminished colors because of the specific transmission properties of light in water. An important factor that must be handled is light attenuation. Light attenuation limits the visibility distance to about 20 meters in clear water and five meters or less in turbid water [3, 4]. It is caused by absorption and scattering, and it worsens with depth. Absorption removes light energy, whereas scattering changes the direction of the light path. Absorption and scattering effects are due to the water itself and to other components such as small observable floating particles. Another problem in the underwater environment is the small color variation [5, 6, 7], caused by colors dropping off according to their wavelengths [2]. The technique proposed in this paper uses the discrete wavelet transform to analyze the image and then addresses these underwater image difficulties.

The present paper is organized as follows: Section two presents a literature survey. Section three describes the problems of underwater images. Sections four and five present the RGB and HSV color spaces, respectively. Section six presents the discrete wavelet transform. The histogram is presented in Section seven. Section eight presents the proposed technique. Experiments and results are displayed in Section nine. Section ten presents the conclusion.

2. Literature Survey

(Tristan 2005):

In 2005 he proposed techniques for enhancing video image sequences to detect fish in the underwater environment. He used the histogram equalization to improve the contrast and visibility. He also used a median filter for noise removal. The edge detection algorithm "Sobel" was used to detect the background [9].

(Weilin Hou, Alan D. Weidemann, Deric J. Gray 2005):

In 2005 they showed that, for gray-level images, the modulation transfer function (MTF) of an optical system gives detailed and precise information about the system's behavior. Underwater imagery can be better restored with knowledge of the system MTF or of the point spread function (PSF), its Fourier-transform equivalent, extending the performance range as well as the information retrieval of underwater electro-optical systems [10].

(Stephane et al. 2006):

In 2006 they proposed an automatic algorithm to pre-process underwater images. It reduces underwater perturbations, and improves image quality. It is comprised of several successive independent processing steps which correct non uniform illumination, suppress noise, enhance contrast and adjust colors. They used the wavelet transform for the denoising process. Also, they used the contrast stretching for the color contrast and histogram equalization for the color correction [1].

(Jyoti and Paresh 2007):

In 2007 they proposed an image processing method for enhancing various slow-motion underwater, ground, and satellite images. In the suggested method, after noise smoothing and contrast stretching, the image is equalized using Histogram Equalization (HE), changing its mean brightness for better contrast [7].

(Kashif et al. 2007):

In 2007 they proposed an approach based on slide stretching. This approach is twofold. First, they applied the contrast stretching of RGB algorithm to equalize the color contrast in image. Secondly, they used the intensity and saturation stretching of HSI to increase the true color and solve the problem of lighting [2].

(W. Hou, A. Weidemann, and D. Gray 2008):

In 2008 they argued that, to reduce blur and improve imagery effectively, it is critical to incorporate knowledge of the optical properties of the water to better model the degradation process. The amount of blurring in an image can be described by how much blur a point source introduces over the imaging range. This property is the point-spread function (PSF), and its Fourier-transformed form is the modulation transfer function (MTF), which describes how fast the details of an image degrade in a given environment [16].

(Liu Chao & Meng Wang 2009):

In 2009 they presented a method by which the effect of turbid water can be removed and the original clarity of images unveiled [17].

(G. Padmavathi, P. Subashini, M. Muthu Kumar and Suresh Kumar Thakur 2010):

In 2010 they compared different filtering techniques available in the literature for pre-processing of underwater images. The filters used normally improve image quality, suppress noise, preserve the edges in an image, and enhance and smooth it. They compared and evaluated the performance of three well-known filters, namely the homomorphic filter, anisotropic diffusion, and wavelet denoising followed by an average filter, for underwater image pre-processing [12].

(Stéphane Bazeille, Isabelle Quidu, Luc Jaulin, Jean-Phillipe Malkasse 2010):

In 2010 they proposed an automatic algorithm that reduces underwater perturbations and improves image quality. It comprises several successive independent processing steps which correct non-uniform illumination, suppress noise, enhance contrast and adjust colors. Filtering performance was assessed using an edge detection robustness criterion [13].

3. Underwater Image Problems

Water is roughly 800 times denser than air. As soon as light enters the water and interacts with its molecules and suspended particles, it suffers loss of intensity, color changes, diffusion, loss of contrast and other effects. The present paper addresses two problems related to underwater images [8, 9, 10]: light absorption and loss of colors. The first is due to factors such as the time of day and the nature, clarity and depth of the water in which the image is acquired. The amount of light that penetrates downward depends strongly on the sun's height (place on Earth, time of day and season) and on the conditions of the sea: a stormy sea absorbs much of the light, whereas a mirror-like sea reflects a lot of it [11, 10]. The depth plays a major role in light absorption. On average, for every 10 m of depth half the light is lost (i.e., at 10 m there is 50% of the light found at the surface, at 20 m only 25% as much light as at the surface, at 30 m 12.5%, and so on) [3, 12, 13].
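As a quick worked check of the half-per-10-m rule above, the remaining fraction of surface light at depth d meters is 0.5^(d/10); a minimal sketch:

```python
def surface_light_fraction(depth_m: float) -> float:
    """Fraction of surface light remaining at a given depth,
    using the rule of thumb that light halves every 10 m."""
    return 0.5 ** (depth_m / 10.0)

# 10 m -> 0.5, 20 m -> 0.25, 30 m -> 0.125, matching the figures above
```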

The second problem is the loss of colors. Water interacts with light by absorbing specific wavelengths: long wavelengths are absorbed more strongly, whereas short wavelengths travel farther through the water. Red and orange disappear first, then yellow, green and purple, and finally blue. The loss of red is dramatic and already noticeable at 50 cm; at five meters depth, 90% of it has disappeared. The loss of color thus varies with distance. Figure (1) illustrates the loss of colors [6, 14, 15, 18, 19, 20].

Figure (1): Loss of Colors

4. The RGB Color Space

Generally, a digital image is a two-dimensional array of numbers, each stored at a position referred to as a picture element, or pixel. The color or grayscale displayed for a given picture element depends on the value stored in that pixel.

Color images can be modeled as three-band monochrome image data, where each band corresponds to a different color. The RGB color model is an additive color model consisting of red, green, and blue; light in these colors is added in various ways to reproduce a broad array of colors [3, 4].

The RGB color model is the most common way to encode color in computing, and several different binary digital representations are in use. Their main characteristic is the quantization of the possible values per component (technically, a sample) using only integer numbers within some range, usually from 0 to some power of two minus one (2^n − 1), to fit them into fixed bit groupings.
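As an illustration of this quantization, a hypothetical helper (not part of any standard API) can pack one pixel's components, each limited to the range 0 .. 2^n − 1, into a single integer:

```python
def pack_rgb(r: int, g: int, b: int, bits: int = 8) -> int:
    """Pack one pixel's components into a single integer, with each
    component quantized to the range 0 .. 2**bits - 1."""
    maxval = (1 << bits) - 1
    for c in (r, g, b):
        if not 0 <= c <= maxval:
            raise ValueError(f"component {c} outside 0..{maxval}")
    return (r << (2 * bits)) | (g << bits) | b

# 8 bits per component gives the familiar 0..255 range (2**8 - 1 = 255),
# so pure red packs to 0xFF0000.
```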

5. The HSV Color Space

The HSV color space (Hue, Saturation, Value) is often used by people who are selecting colors (e.g., of paints or inks) from a color wheel or palette, because it corresponds better to how people experience color than the RGB color space does. As hue varies from 0 to 1.0, the corresponding colors vary from red through yellow, green, cyan, blue, magenta, and back to red, so that there are actually red values both at 0 and 1.0. As saturation varies from 0 to 1.0, the corresponding colors (hues) vary from unsaturated (shades of gray) to fully saturated (no white component). As value, or brightness, varies from 0 to 1.0, the corresponding colors become increasingly brighter. The following figure illustrates the HSV color space [3,4].

Figure (2): HSV Color Space

As the hue plane image in the preceding figure illustrates, hue values make a linear transition from high to low. If you compare the hue plane image against the original image, you can see that shades of deep blue have the highest values, and shades of deep red have the lowest values.


Saturation can be thought of as the purity of a color. As the saturation plane image shows, the colors with the highest saturation have the highest values and are represented as white. In the center of the saturation image, notice the various shades of gray. These correspond with a mixture of colors; the cyan, greens, and yellow shades are mixtures of true colors. Value is roughly equivalent to brightness, and you will notice that the brightest areas of the value plane correspond with the brightest colors in the original image. 
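The RGB-to-HSV conversion described above can be tried with Python's standard colorsys module, which represents all components in the 0..1 range exactly as in the description:

```python
import colorsys

# Convert a few pure colors from RGB to HSV (all components in 0..1).
# Hue 0.0 is red, 1/3 is green, 2/3 is blue; full saturation and value
# for each pure color.
for name, rgb in [("red", (1.0, 0.0, 0.0)),
                  ("green", (0.0, 1.0, 0.0)),
                  ("blue", (0.0, 0.0, 1.0))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{name}: hue={h:.3f} sat={s:.1f} val={v:.1f}")
```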

6. The Discrete Wavelet Transform

A wavelet can be described as a wave of brief duration. The fundamental concept of wavelet analysis is the use of a wavelet as the kernel function of an integral transform and of a series expansion, much as the sinusoid is used in Fourier analysis. Unlike Fourier analysis, which uses a single family of sinusoids, wavelet analysis builds on a rather large family of functions generated from a single prototype, termed the mother wavelet. The discrete wavelet transform of a signal x(t) is given by the following equation:

W(b, a) = (1/√a) ∫ x(t) h*((t − b)/a) dt … (1)







Figure: A single octave filter bank tree for the 1-D DWT

where b is the time factor, a is the scale factor, and h(t) is the wavelet basis function. The properties of a wavelet transform depend heavily on its basis wavelet function. A single-octave filter bank tree for the 1-D DWT is shown in the figure above. The DWT can also be viewed as a kind of multi-resolution decomposition of a sequence; by applying the sub-band scheme recursively, a fast DWT can be constructed.

In the figure, h(n) denotes the high-pass filter (also known as the wavelet function), g(n) denotes the low-pass filter (also known as the scaling function), and ↓2 denotes subsampling by dropping every second sample. If an input sequence x(n) contains N samples, the total output length is also N: the first octave computes N/2 samples, the second octave N/4 samples, and so on.
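Assuming the Haar wavelet for concreteness (the paper does not fix a basis), one octave of this filter bank reduces to pairwise low-pass and high-pass filtering followed by subsampling by 2:

```python
import math

def haar_octave(x):
    """One octave of the filter bank: low-pass (scaling) and high-pass
    (wavelet) filtering followed by subsampling by 2.  An input of N
    samples yields N/2 approximation and N/2 detail samples (Haar basis)."""
    assert len(x) % 2 == 0, "input length must be even"
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail
```

For a locally constant signal such as [1, 1, 2, 2] the detail samples are zero, illustrating the energy compaction into the approximation half.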

The discrete wavelet transform (DWT) of 2D performs an octave subband decomposition of an image. The output of the first analysis stage is the low-low (LL) subband (an approximation of the original image); the high-low (HL) subband (the horizontal detail); the low-high (LH) subband (the vertical details); and, the high-high (HH) subband (the diagonal details), as shown in Figure (3). The synthesis stage reconstructs the image [3,5].

Wavelet analysis allows the use of long time intervals where we want more precise low-frequency information, and shorter regions where we want high-frequency information [4]. The low-frequency content is the most important part. It is what gives the signal its identity. The high-frequency content, on the other hand, imparts flavor or nuance. Wavelet decomposition for Lena's image in 1-level of wavelet transform is shown in Figure (4).









Figure (3) : Wavelet Subband Images of 2-D, 1-level.

Figure (4): The original image and 1-level wavelet decomposition of Lena's image.

Subband coding is a coding strategy that tries to isolate different characteristics of a signal in a way that collects the signal energy into few components. This is referred to as energy compaction, which is desirable because it is easier to efficiently code these components than the signal itself. This kind of two-dimensional DWT leads to a decomposition of approximation coefficients at level j in four components: the approximation at level j + 1, and the details in three orientations (horizontal, vertical, and diagonal). The present paper utilizes the DWT to enhance the underwater images [6].
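A minimal sketch of this one-level 2-D decomposition, again assuming the Haar wavelet, applies the row transform and then the column transform to obtain the four subbands:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: transform rows, then columns, returning
    the approximation subband LL and the detail subbands HL, LH, HH."""
    img = np.asarray(img, dtype=float)
    r2 = np.sqrt(2.0)
    # Row transform: pairwise sums (low-pass) and differences (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) / r2
    hi = (img[:, 0::2] - img[:, 1::2]) / r2
    # Column transform on each half.
    ll = (lo[0::2, :] + lo[1::2, :]) / r2   # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / r2   # vertical detail
    hl = (hi[0::2, :] + hi[1::2, :]) / r2   # horizontal detail
    hh = (hi[0::2, :] - hi[1::2, :]) / r2   # diagonal detail
    return ll, hl, lh, hh
```

A constant image has all its energy in LL (every detail subband is zero), which is the energy compaction described above.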


7. The Histogram

An image histogram is a chart that shows the distribution of intensities in an indexed or grayscale image [4]. The information in a histogram can be used to choose an appropriate enhancement operation. For example, if the histogram shows that an image uses only a narrow range of intensity values, an intensity adjustment function can spread the values across a wider range. Figure (5) shows the histogram of an image.

Figure (5): (a) Grayscale image (b) Image histogram.
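A bare-bones histogram of this kind can be computed by counting intensities into equal-width bins; the function below is an illustrative sketch, not a library routine:

```python
def histogram(pixels, bins=8, max_value=255):
    """Count how many pixel intensities fall into each of `bins`
    equal-width intervals over 0..max_value."""
    counts = [0] * bins
    width = (max_value + 1) / bins
    for p in pixels:
        counts[min(int(p / width), bins - 1)] += 1
    return counts

# If most counts pile up in a few adjacent bins, the image uses a
# narrow intensity range and a stretching step is likely to help.
```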

8. The Proposed Technique

As illustrated in Section 3, underwater images suffer from light absorption and loss of colors. The present paper proposes a new technique to solve these two problems. The technique uses the HSV color space rather than the RGB color space: HSV separates hue, saturation, and value into distinct layers, which makes it possible to process each layer independently.

The first step of the technique transforms the hue layer, which represents the color values of the image, from the spatial domain into the wavelet domain using the DWT. Second, the approximation subband (LL) of the hue layer is stretched to substitute for the loss of colors by shifting color values from the blue area into the other color areas. The stretching algorithm applies a linear scaling function to the pixel values; each pixel is scaled using the following function [8, 9]:

Po = (Pi − c) × (b − a) / (d − c) + a


where:

Po is the normalized pixel value,

Pi is the considered pixel value,

a is the minimum value of the desired range,

b is the maximum value of the desired range,

c is the lowest pixel value currently present in the image, and

d is the highest pixel value currently present in the image.

Each Pi is first processed in the wavelet stage and then passed to the stretching operation.
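The scaling function above can be sketched directly; `stretch` here is a hypothetical helper name that maps the current range [c, d] of a pixel list onto a desired range [a, b]:

```python
def stretch(pixels, a=0.0, b=1.0):
    """Linear contrast stretch mapping the current range [c, d] onto
    the desired range [a, b]:  Po = (Pi - c) * (b - a) / (d - c) + a."""
    c, d = min(pixels), max(pixels)
    if d == c:
        return [a for _ in pixels]   # flat image: nothing to stretch
    return [(p - c) * (b - a) / (d - c) + a for p in pixels]

# Example: [0.2, 0.4, 0.6] stretched to [0, 1] becomes [0.0, 0.5, 1.0]
```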

1. Input underwater image (RGB color space).
2. Convert to the HSV color space.
3. Transform the hue layer to the wavelet domain.
4. Stretch the approximation subband of the hue layer.
5. Transform the stretched hue layer back to the spatial domain.
6. Stretch the saturation layer.
7. Stretch the value layer.
8. Convert back to the RGB color space.
9. Output the enhanced image.

Figure (6): Flowchart of the proposed technique

This stretch counteracts the dominance of the blue color in underwater images. The stretched layer is then transformed back to the spatial domain. This process substitutes for the loss of colors uniformly.

Then, to increase the image contrast, the saturation layer is also stretched to saturate the colors. Finally, the brightness of the image is increased by the same stretching method, which at the same time substitutes for the loss of light underwater. Figure (6) illustrates the flowchart of the proposed technique.
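The whole pipeline can be sketched as below. This is only an illustrative implementation under stated assumptions: the Haar wavelet stands in for the unspecified basis, the stretch target ranges are arbitrary choices (the paper does not fix them numerically), and `enhance` and `stretch` are hypothetical names, not code from the paper:

```python
import colorsys
import numpy as np

R2 = np.sqrt(2.0)

def stretch(x, a, b):
    """Linear stretch of array x onto [a, b]; flat arrays pass through."""
    c, d = x.min(), x.max()
    return x if d == c else (x - c) * (b - a) / (d - c) + a

def enhance(rgb):
    """Sketch of the pipeline for an H x W x 3 RGB image with float
    components in 0..1 and even H, W."""
    # 1. RGB -> HSV, one layer per channel
    hsv = np.array([[colorsys.rgb_to_hsv(*px) for px in row] for row in rgb])
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]

    # 2. one-level 2-D Haar DWT of the hue layer (rows, then columns)
    lo, hi = (h[:, 0::2] + h[:, 1::2]) / R2, (h[:, 0::2] - h[:, 1::2]) / R2
    ll, lh = (lo[0::2] + lo[1::2]) / R2, (lo[0::2] - lo[1::2]) / R2
    hl, hh = (hi[0::2] + hi[1::2]) / R2, (hi[0::2] - hi[1::2]) / R2

    # 3. stretch the approximation (LL) subband; [0, 2] is its full
    #    theoretical range when hue lies in [0, 1] (an arbitrary choice)
    ll = stretch(ll, 0.0, 2.0)

    # 4. inverse 2-D Haar DWT: merge the four subbands back into hue
    lo2, hi2 = np.empty_like(lo), np.empty_like(hi)
    lo2[0::2], lo2[1::2] = (ll + lh) / R2, (ll - lh) / R2
    hi2[0::2], hi2[1::2] = (hl + hh) / R2, (hl - hh) / R2
    h2 = np.empty_like(h)
    h2[:, 0::2], h2[:, 1::2] = (lo2 + hi2) / R2, (lo2 - hi2) / R2
    h2 = np.clip(h2, 0.0, 1.0)

    # 5. stretch saturation and value directly in the spatial domain
    s2, v2 = stretch(s, 0.0, 1.0), stretch(v, 0.0, 1.0)

    # 6. HSV -> RGB
    return np.array([[colorsys.hsv_to_rgb(h2[i, j], s2[i, j], v2[i, j])
                      for j in range(rgb.shape[1])]
                     for i in range(rgb.shape[0])])
```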

9. Experimental Results

This section presents several experiments used to examine the proposed wavelet-based underwater image enhancement technique. The technique shows good results, as is evident from the histograms of the original and the enhanced images. Figures (7) through (12) show the experimental results.

Original Image

Enhanced Image

Figure (7): Experiment 1

Figure (8): Experiment 2

Original Image

Enhanced Image

Figure (9): Experiment 3

Original Image

Enhanced Image

Figure (10): Experiment 4

Original Image

Enhanced Image

Figure (11): Experiment 5

Original Image

Enhanced Image

Figure (12): Experiment 6

Original Image

Enhanced Image

10. Conclusion

In the present paper, the three layers of the image (hue, saturation, and value) are stretched to solve two major problems of underwater images: light absorption and loss of colors. The stretching process is applied directly to the saturation and value layers in the spatial domain, whereas it is applied to the hue layer in the wavelet domain. Applying the stretching process directly to the hue layer in the spatial domain causes random color substitution; Figures (13) and (14) show the enhancement without using the wavelet transform.

Original image

Enhanced image without wavelet

Figure (13): Stretching colors without wavelet

Original image

Enhanced image without wavelet


Figure (14): Stretching colors without wavelet
