X Ray Images Preprocessing And Enhancement Biology Essay


X-ray image enhancement, combined with preprocessing, produces a high-quality image. Preprocessing removes noise and deblurs the image; afterwards, enhancement in the spatial domain as well as the frequency domain produces an enhanced image with better visual quality.

Preprocessing of an image involves removing noise and deblurring the image using different filters; once this first step is complete, the X-ray image is ready for enhancement. Noise in X-ray images makes enhancement difficult, and without its removal the enhancement process does not work properly.

The basic operators of binary morphology are erosion, dilation, opening and closing. As the names indicate, the erosion operator makes a region smaller by eroding its borders, while the dilation operator enlarges a region. The opening and closing operators combine the two previous operators. More precisely, an opening operation first applies erosion and then dilation, and is used both to eliminate small objects inside and outside the lung and to highlight the separation between distinct regions so as to make the lung border recognition easier. A closing operation, instead, consists of a dilation followed by an erosion. It enhances borders and fills the gaps in the borders which can cause problems in the border detection phase. [17]

Closing, on the other hand, is an operation in which dilation is performed first, followed by erosion, with the same structuring element applied for both operations. It is useful for bridging narrow breaks, eliminating small holes, and filling small gaps. With structuring element B, the closing of an image A can be written as:

A • B = (A ⊕ B) ⊖ B

where ⊕ denotes dilation and ⊖ denotes erosion.

4.2.2. Noise removing filter

A smoothing filter is used to remove noise from an image. In a colour image, each pixel is represented by three scalar values for the red, green, and blue chromatic intensities. At each pixel studied, a smoothing filter takes the surrounding pixels into account to derive a more accurate version of that pixel. By taking neighboring pixels into consideration, extreme "noisy" pixels can be replaced. However, outlier pixels may represent uncorrupted fine detail, which may be lost in the smoothing process.

4.2.2.1. Median filter

The mean filter is a linear filter that applies a mask over each pixel in the signal: each of the components of the pixels falling under the mask is averaged to form a single pixel, which then replaces the pixel being studied. The median filter, in contrast, takes the magnitudes of all of the vectors within the mask, sorts them, and replaces the studied pixel with the pixel of median magnitude. Because it discards extreme values instead of averaging them, the median filter is particularly effective against the "salt and pepper" noise that results when isolated points of a grayscale image are corrupted.
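The median-filter behaviour described above can be sketched in NumPy as follows (the 3 × 3 window size and the reflect-padding choice are illustrative, not from the text; in practice a library routine such as scipy.ndimage.median_filter does the same job):

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighborhood.

    A minimal sketch: border pixels are handled by reflecting the image
    edges before sliding the window.
    """
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# Salt-and-pepper noise: a single extreme pixel is removed by the median,
# because the median of the window ignores the outlier.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255          # one "salt" pixel
clean = median_filter(img)
```

Note how a mean filter over the same window would have smeared the 255 value into its neighbours, while the median restores the pixel to 100 exactly.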

4.2.2.2. Wiener filter

For any of a number of reasons, a digital signal may become corrupted with noise; the introduction of noise is often modeled as an additive process. By making assumptions about the signal and noise characteristics, and limiting ourselves to a linear approach, a solution known as the Wiener filter can be formulated. When the filter is convolved with the corrupted signal, an estimate of the original signal is recovered. Noise levels are reduced, but much of the sharp image structure is also lost, an unfortunate but expected side effect given that the Wiener filter is low-pass in nature.
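The source does not name an implementation; as one readily available choice, scipy.signal.wiener applies an adaptive Wiener filter over a local window. The synthetic image and noise level below are illustrative:

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)

# A smooth synthetic "image" corrupted by additive Gaussian noise.
x = np.linspace(0, 1, 64)
clean = np.outer(x, x)
noisy = clean + rng.normal(0.0, 0.05, clean.shape)

# Adaptive Wiener filter over a 5x5 window.
restored = wiener(noisy, mysize=5)

# The filter lowers the mean squared error against the clean image,
# at the cost of smoothing some fine structure (its low-pass nature).
mse_noisy = np.mean((noisy - clean) ** 2)
mse_restored = np.mean((restored - clean) ** 2)
```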

4.2.2.3. Average filtering

If the Gaussian noise has zero mean, we would expect an average filter to average the noise toward 0; the larger the filter mask, the closer to zero the result. Unfortunately, averaging tends to blur an image. However, if we are prepared to trade off blurring for noise reduction, an averaging filter can reduce noise significantly.

4.2.2.4. Arithmetic mean filter

The arithmetic mean filter is a very simple noise-removing filter: each pixel is replaced by the average of the grey levels g(s, t) in the m × n window Sxy centred on it, and is calculated as follows:

f̂(x, y) = (1/mn) Σ(s,t)∈Sxy g(s, t)

This is implemented as the simple smoothing filter but it blurs the image to remove noise. There are different kinds of mean filters all of which exhibit slightly different behaviour:

Geometric Mean

Harmonic Mean

Contraharmonic Mean

4.3. Spatial domain X-ray image enhancement

Spatial domain methods operate directly on the pixels. Image processing in the spatial domain can be expressed by:

g(m, n) = T(f(m, n))

where f(m,n) is the input image, g(m,n) is the processed image, and T is the operator defining the modification process. The operator T is typically a single-valued, monotone function that can operate on individual pixels or on selected regions of the image. Many powerful enhancement techniques can be formulated in the spatial domain of an image.
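The relation g(m, n) = T(f(m, n)) can be illustrated with two classic point operators; the negative and the gamma value 0.5 below are illustrative examples, not from the text:

```python
import numpy as np

def apply_point_transform(f, T):
    """g(m, n) = T(f(m, n)): apply a single-valued operator pixel-wise."""
    return T(f.astype(float))

f = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Image negative: a simple monotone (decreasing) point operator.
g_neg = apply_point_transform(f, lambda v: 255 - v)

# Gamma correction: a monotone non-linear operator, here with gamma = 0.5,
# which brightens the dark regions of the image.
g_gamma = apply_point_transform(f, lambda v: 255 * (v / 255) ** 0.5)
```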

4.3.1. Unsharp filter

Unsharp masking is one of the techniques typically used for edge enhancement. In this approach, a smoothed version of the image is subtracted from the original image, hence tipping the image balance toward the sharper content of the image. The process can be defined by:

g(m, n) = f(m, n) + a[f(m, n) − h(m, n) * f(m, n)]

where h(m,n) is a smoothing kernel, * denotes convolution, and a defines the degree of edge enhancement. The above equation can be rearranged as:

g(m, n) = f(m, n) + a·e(m, n), with e(m, n) = f(m, n) − h(m, n) * f(m, n)

which describes the process as adding edge information e(m,n) to the image for sharpening.

4.3.2. Sobel filter

Sobel filtering is used to detect the horizontal and vertical edges of an image. The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image.

The Sobel filter approximates the gradient digitally using the following 3 × 3 masks:

Gx mask:            Gy mask:
-1  -2  -1          -1   0   1
 0   0   0          -2   0   2
 1   2   1          -1   0   1

Figure 4.2. a) Sobel filter mask, vertical (Gx); b) Sobel filter mask, horizontal (Gy)

The corresponding equations, with Z1 … Z9 denoting the pixels under the mask read row by row, are:

Gx = ( Z7+2Z8+Z9 )-( Z1+2Z2+Z3 )

Gy = ( Z3+2Z6+Z9 )-( Z1+2Z4+Z7 )
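The Gx and Gy expressions above can be implemented directly; the step-edge test image and the |Gx| + |Gy| approximation of the gradient magnitude are illustrative choices:

```python
import numpy as np

# Sobel masks matching Gx = (Z7+2Z8+Z9) - (Z1+2Z2+Z3) and
# Gy = (Z3+2Z6+Z9) - (Z1+2Z4+Z7), with Z1..Z9 read row by row.
KX = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]])
KY = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| at interior pixels."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            gx = (win * KX).sum()
            gy = (win * KY).sum()
            out[i, j] = abs(gx) + abs(gy)
    return out

# A vertical step edge: the response is large next to the edge and
# exactly zero in the flat regions on either side.
img = np.zeros((5, 6))
img[:, 3:] = 100
edges = sobel_magnitude(img)
```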

4.3.3. Contrast stretching

Contrast stretching (often called normalization) is a simple image enhancement technique that attempts to improve the contrast in an image by 'stretching' the range of intensity values it contains to span a desired range of values, e.g. the full range of pixel values that the image type concerned allows. It differs from the more sophisticated histogram equalization in that it can only apply a linear scaling function to the image pixel values. As a result the 'enhancement' is less harsh. The contrast stretching in this application is done by using imadjust.
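The text uses MATLAB's imadjust; as a sketch of the same linear scaling, the NumPy analogue below maps the input range [lo, hi] onto [0, 255] (the default limits taken from the image's own min/max are an illustrative choice):

```python
import numpy as np

def stretch_contrast(img, lo=None, hi=None, out_min=0.0, out_max=255.0):
    """Linearly map [lo, hi] onto [out_min, out_max], clipping outliers.

    A NumPy analogue of MATLAB's imadjust; by default the stretch limits
    are the image's own minimum and maximum.
    """
    img = img.astype(float)
    lo = img.min() if lo is None else lo
    hi = img.max() if hi is None else hi
    scaled = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
    return np.clip(scaled, out_min, out_max)

# A low-contrast image occupying only [60, 120] is stretched to [0, 255].
img = np.array([[60, 90], [100, 120]], dtype=np.uint8)
stretched = stretch_contrast(img)
```

Because the mapping is linear, the relative ordering and spacing of grey levels is preserved, which is why the result looks less harsh than histogram equalization.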

4.3.4. Histogram equalization

Histogram equalization techniques provide a sophisticated method for modifying the dynamic range and contrast of an image by altering it such that its intensity histogram has a desired shape. Unlike contrast stretching, histogram modeling operators may employ non-linear and non-monotonic transfer functions to map between pixel intensity values in the input and output images. Histogram equalization itself employs a monotonic, non-linear mapping which re-assigns the intensity values of pixels in the input image such that the output image contains a uniform distribution of intensities.
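The monotonic, non-linear mapping is the normalized cumulative histogram (CDF) scaled to the output range; a minimal sketch for 8-bit images (the synthetic dark test image is illustrative):

```python
import numpy as np

def equalize_histogram(img):
    """Map grey levels through s = 255 * CDF(r), the classic monotone,
    non-linear transform that spreads intensities toward uniformity."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    lut = np.round(255 * cdf).astype(np.uint8)   # lookup table r -> s
    return lut[img]

# A dark, low-contrast image concentrated in [40, 79]: after equalization
# the grey levels are redistributed across almost the full [0, 255] range.
img = np.repeat(np.arange(40, 80, dtype=np.uint8), 10).reshape(20, 20)
eq = equalize_histogram(img)
```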

4.3.5. Averaging filter

The average filter calculates the average of the grey-level values within a rectangular filter window surrounding each pixel. This has the effect of smoothing the image and reducing noise. The mask parameter specifies the area within the input channel that will be processed: only the area under the mask is filtered, and the rest of the image is left unchanged. The dimensions of the filter window must be odd.

4.4. Frequency domain X-ray image enhancement

The frequency domain is the space defined by the values of the Fourier transform and its frequency variables. The procedure for filtering in the frequency domain is as follows: let the image be f(x, y), the frequency transfer function of the filter be H(u, v), and the output image be g(x, y); then it can easily be shown that

G(u, v) = H(u, v) F(u, v)

To perform this task and find the output image g(x, y), one can follow the steps given below:

1. Multiply f(x, y) by (-1)^(x+y) to obtain f(x, y)(-1)^(x+y).

2. Compute the Fourier transform F(u, v) of the result of step 1.

3. Multiply the resulting Fourier transform by the required filter H(u, v).

4. Compute the inverse Fourier transform of the product to obtain g(x, y)(-1)^(x+y).

5. Obtain g(x, y) by multiplying the result of step 4 by (-1)^(x+y).
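The five steps can be sketched directly with numpy.fft; the identity filter used for the check at the end is illustrative, and H is assumed to be defined with zero frequency at the array centre (which is exactly what the (-1)^(x+y) pre-multiplication arranges):

```python
import numpy as np

def frequency_filter(f, H):
    """Apply transfer function H(u, v) following the five steps above."""
    m, n = f.shape
    x, y = np.meshgrid(np.arange(n), np.arange(m))
    center = (-1.0) ** (x + y)          # step 1: shift the spectrum centre
    F = np.fft.fft2(f * center)         # step 2: Fourier transform
    G = H * F                           # step 3: multiply by the filter
    g = np.fft.ifft2(G).real            # step 4: inverse transform
    return g * center                   # step 5: undo the centring

# Sanity check: the identity filter (H = 1 everywhere) must return the
# input image unchanged.
f = np.arange(16.0).reshape(4, 4)
g = frequency_filter(f, np.ones((4, 4)))
```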

4.4.1. Sharpening frequency domain filters: highpass filters

The transfer function of a Butterworth highpass filter of order n with cutoff frequency D0 from the origin is defined as

H(u, v) = 1 / (1 + [D0 / D(u, v)]^(2n))

A Butterworth high-pass filter can be used as an edge detector, or as part of a sharpening filter.
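The transfer function above can be built on a discrete (u, v) grid; the shape, cutoff D0 = 10, and order n = 2 below are illustrative values:

```python
import numpy as np

def butterworth_highpass(shape, D0, n):
    """H(u, v) = 1 / (1 + [D0 / D(u, v)]^(2n)), centred at the array middle."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U ** 2 + V ** 2)        # distance from the filter centre
    D[D == 0] = 1e-9                    # avoid division by zero at the origin
    return 1.0 / (1.0 + (D0 / D) ** (2 * n))

H = butterworth_highpass((64, 64), D0=10, n=2)
```

Near the centre (low frequencies) H is close to 0, so the DC component and smooth illumination are suppressed; far from the centre (high frequencies, i.e. edges) H approaches 1.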

4.4.2. High-frequency emphasis

The highpass filter is multiplied by a constant and an offset is added so that the DC term is not eliminated by the filter. The transfer function is

Hhfe(u, v) = a + bHhp(u, v)

where a > 0 and b > a

This reduces to high-boost filtering when a = (A − 1) and b = 1. When b > 1, high frequencies are emphasized.

4.4.3. Homomorphic filtering

Images normally consist of light reflected from objects. The basic nature of the image F(x,y) may be characterized by two components: (1) the amount of source light incident on the scene being viewed, and (2) the amount of light reflected by the objects in the scene. These portions of light are called the illumination and reflectance components, and are denoted i(x,y) and r(x,y) respectively. The functions i and r combine multiplicatively to give the image function F:

F(x,y) = i(x,y) r(x,y)

Suppose, however, that we define

z(x,y)= lnF(x,y)

=ln i(x,y)+ ln r(x,y)


Taking the Fourier transform (written ℱ to distinguish it from the image F) of both sides gives

ℱ{z(x,y)} = ℱ{ln F(x,y)}

= ℱ{ln i(x,y)} + ℱ{ln r(x,y)}

or

Z(w,v) = I(w,v) + R(w,v)

where Z, I and R are the Fourier transforms of z, ln i and ln r respectively. The function Z represents the Fourier transform of the sum of two images: a low frequency illumination image and a high frequency reflectance image. If we now apply a filter with a transfer function that suppresses low frequency components and enhances high frequency components, then we can suppress the illumination component and enhance the reflectance component. Thus

S(w,v)=H(w,v) Z(w,v)

= H(w,v) I(w,v)+ H(w,v) R(w,v)

where S is the Fourier transform of the result. In the spatial domain

s(x,y) = ℱ⁻¹(S(w,v))

= ℱ⁻¹(H(w,v) I(w,v)) + ℱ⁻¹(H(w,v) R(w,v))

By letting

i'(x,y) = ℱ⁻¹(H(w,v) I(w,v))

and

r'(x,y) = ℱ⁻¹(H(w,v) R(w,v))

we get

s(x,y) = i'(x,y) + r'(x,y).

Finally, as z was obtained by taking the logarithm of the original image F, the inverse operation yields the desired enhanced image F̂: that is,

F̂(x,y) = exp[s(x,y)]

= exp[i'(x,y)] exp[r'(x,y)]

= i₀(x,y) r₀(x,y)

where i₀(x,y) = exp[i'(x,y)] and r₀(x,y) = exp[r'(x,y)] are the illumination and reflectance components of the output image.
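The full log → filter → exp pipeline can be sketched as follows. The Gaussian-shaped high-emphasis transfer function is one commonly used choice, not specified by the text, and the parameter values (D0, gamma_low, gamma_high, c) and the test ramp image are illustrative:

```python
import numpy as np

def homomorphic_filter(img, D0=15.0, gamma_low=0.5, gamma_high=2.0, c=1.0):
    """Homomorphic filtering: log -> FFT -> high-emphasis filter -> IFFT -> exp.

    The assumed transfer function attenuates low frequencies (illumination)
    toward gamma_low and boosts high frequencies (reflectance) toward
    gamma_high.
    """
    img = img.astype(float)
    z = np.log1p(img)                         # z = ln F (log1p avoids ln 0)
    Z = np.fft.fftshift(np.fft.fft2(z))

    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    D2 = U ** 2 + V ** 2
    H = (gamma_high - gamma_low) * (1 - np.exp(-c * D2 / (2 * D0 ** 2))) \
        + gamma_low

    s = np.fft.ifft2(np.fft.ifftshift(H * Z)).real
    return np.expm1(s)                        # invert the logarithm

# A smooth illumination ramp: the filter compresses the large dynamic
# range contributed by the slowly varying illumination.
img = np.outer(np.linspace(1, 100, 32), np.ones(32))
out = homomorphic_filter(img)
```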

4.4.4. Unsharp masking

An "unsharp mask" is actually used to sharpen an image, contrary to what its name might lead you to believe. Sharpening can help you emphasize texture and detail in digital images. Unsharp masks are probably the most common type of sharpening, and can be performed with nearly any image editing software. An unsharp mask cannot create additional detail, but it can greatly enhance the appearance of detail by increasing small-scale contrast.

Unsharp masking generates a sharp image by subtracting a blurred version of the image from itself; equivalently, a highpass-filtered image is obtained by subtracting the lowpass-filtered version from the original:

fhp(x, y) = f(x, y) − flp(x, y)
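The fhp relation can be sketched with a simple box blur standing in for the lowpass filter (the 3 × 3 window, the amount parameter, and the step-edge test image are illustrative choices):

```python
import numpy as np

def box_blur(img, size=3):
    """Simple lowpass: average over a size x size window (reflect-padded)."""
    pad = size // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + size, j:j + size].mean()
    return out

def unsharp_mask(img, amount=1.0):
    """f + amount * fhp, with fhp(x, y) = f(x, y) - flp(x, y)."""
    f = img.astype(float)
    fhp = f - box_blur(f)          # the highpass (detail) component
    return f + amount * fhp

# A vertical step edge: unsharp masking overshoots on both sides of the
# edge (values above 100 and below 0), which is what makes it look sharper.
img = np.zeros((6, 6))
img[:, 3:] = 100
sharp = unsharp_mask(img)
```

The overshoot on either side of the edge is the "increased small-scale contrast" described above; flat regions are left untouched because their highpass component is zero.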