# Image Restoration And Enhancement Methods Computer Science Essay


Multimedia is a growing field in today's communication networks, with increasing demand for incorporating a visual aspect into other modes of communication. It is therefore unavoidable that transmitted images and video are sometimes corrupted or degraded in perceptual quality in a variety of ways.

## What is Digital Image Processing?

An image is defined as a two-dimensional function, f(x,y), where x and y are plane coordinates and the amplitude of f at any pair of coordinates (x,y) is called the intensity or gray level of the image. When x, y and the intensity values of f are all finite and discrete quantities, we call the image a digital image. Processing an image by means of computer algorithms is called digital image processing. Compared to analog image processing, digital image processing has many advantages: it can avoid problems such as signal distortion, image degradation and the build-up of noise during processing.

## Image Restoration and Enhancement Methods:

Nowadays digital images have covered the complete world. Images are acquired by photoelectronic or photochemical methods. The sensing devices tend to reduce the quality of digital images by introducing noise and blur due to motion or misfocus of the camera.

One of the first applications of digital images was in the newspaper industry, when pictures were sent by submarine cable between New York and London. The introduction of the cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours.

Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and distribution of intensity levels.

Digital image processing techniques began to be used in medical imaging, remote Earth resources observation and astronomy in the late 1960s and early 1970s.

Tomography was invented independently by Sir Godfrey N. Hounsfield and Professor Allan M. Cormack, who shared the 1979 Nobel Prize in Medicine for their invention. X-rays themselves, however, were discovered in 1895 by Wilhelm Conrad Roentgen.

Geographers use similar techniques to study pollution patterns from aerial and satellite imagery.

Image enhancement and restoration procedures are used to process degraded images of unrecoverable objects, or experimental results too expensive to duplicate.

A gray level transformation that maps a given empirical distribution function of gray level values in an image into a uniform distribution has been used both for image enhancement and as a normalization procedure. (I. Pitas)

Image enhancement refers to increasing image quality by sharpening certain image features (edges, boundaries and contrast) and reducing noise. Digital image enhancement and restoration employ two-dimensional filters, which are broadly classified into linear and non-linear digital filters. Linear digital filters can be designed and implemented in either the spatial domain or the frequency domain. (K.S. Thyagarajan)

The spatial domain refers to the image plane itself; spatial domain methods are based on direct manipulation of the pixels in an image. Intensity transformations and spatial filtering are the two principal categories of spatial domain methods.

In frequency domain methods, the image is first transformed to the frequency domain: the Fourier transform of the image is computed and all processing is performed on this transform. Finally, the inverse Fourier transform is applied to obtain the resultant image. (Rafael C. Gonzalez and Richard E. Woods)

Image enhancement techniques include:

- Median filtering
- Neighbourhood averaging
- Low pass filtering
- Histogram techniques

In 1980, work on CCD scanners was reviewed and solid-state scanners incorporating on-chip signal processing functions were described. The trend was towards 'smart' scanners: scanners with on-chip real-time processing functions such as analogue-to-digital conversion, thresholding, data compaction, edge enhancement and other real-time image processing functions. (Chamberlain, Savvas G., 1980)

The image enhancement algorithm first separates an image into its lows (low-pass filtered form) and highs (high-pass filtered form) components. The lows component then controls the amplitude of the highs component to increase the local contrast. The lows component is then subjected to a non-linearity to modify the local luminance mean of the image and is combined with the processed highs component. The performance of this algorithm when applied to enhance typical undegraded images, images with large shaded areas, and also images degraded by cloud cover will be illustrated by way of examples. (Peli, T.; Jae Lim;1981)

Enhancement algorithms based on local medians and interquartile distances are more effective than those using means and standard deviations for the removal of spike noise, preserve edge sharpness better and introduce fewer artifacts around high contrast edges. They are not as fast as the mean-standard deviation equivalents but are suitable for large data sets treated in small machines in production quantities.( Scollar, I.; Huang, T.; Weidner, B.;1983)

Filtering CT images to remove noise, and thereby enhance the signal-to-noise ratio, is a difficult process because CT noise has a broad-band spatial-frequency character, overlapping frequencies of interest in the signal. A measurement of the noise power spectrum of a CT scanner and some form of spatially variant filtering of CT images can be beneficial if the filtering process is based upon the differences between the frequency characteristics of the noise and the signal. To evaluate the performance, a percentage standard deviation, an index representing contrast, a frequency spectral pattern, and several CT images processed with the filter were used. (Okada, Masahiko, 1985)

A two-dimensional least-mean-square (TDLMS) adaptive algorithm based on the method of steepest descent is proposed and applied to noise reduction in images. The adaptive property of the TDLMS algorithm enables the filter to have improved tracking performance in nonstationary images. The results presented show that the TDLMS algorithm can be used successfully to reduce noise in images. The algorithm complexity is 2(N×N) multiplications and the same number of additions per image sample, where N is the parameter-matrix dimension. The algorithm can be used in a number of two-dimensional applications such as image enhancement and image data processing. (Hadhoud, M.M.; Thomas, D.W., 1988)

Image processing techniques are used to determine the range and alignment of a land vehicle. The approach taken is to establish a state vector of quantities derived from an image sequence, and to refine this over the mission. The image processing techniques applied fall into the generic categories of enhancement, detection, segmentation, and classification. Approaches to estimating the alignment and range of a vehicle in computationally efficient ways are presented. The estimates of quantities extracted from single image frames are subject to errors. This approach facilitates the integration of results from multiple images, and from multiple sensor systems.( Atherton, T.J.; Nudd, G.R.; Clippingdale, S.C.; Francis, N.D.; Kerbyson, D.J.; Vaudin, G.J.B.1990)

The JPEG coder has proven to be extremely useful in coding image data. For low bit-rate image coding (0.75 bit or less per pixel), however, the block effect becomes very annoying, and edges display a 'wave-like' appearance. An enhancement algorithm is proposed to improve the subjective quality of the reconstructed images. First, the pixels of the coded image are classified into three broad categories: (a) pixels belonging to quasi-constant regions, where the pixel intensity values vary slowly; (b) pixels belonging to dominant-edge (DE) regions, which are characterized by a few sharp and dominant edges; and (c) pixels belonging to textured regions, which are characterized by many small edges and thin-line signals. An adaptive mixture of some well-known spatial filters, which uses the pixel labeling information for its adaptation, is used as the adaptive optimal spatial filter for image enhancement. (Kundu, A., 1995)

Videotexts are low-resolution and mixed with complex backgrounds, so image enhancement is key to successful recognition of videotexts. In Hangul characters especially, several consonants cannot be distinguished without sophisticated image enhancement techniques. In this experiment, after multiple videotext frames containing the same captions are detected and the caption area in each frame is extracted, five image enhancement techniques are applied serially: multi-frame integration, resolution enhancement, contrast enhancement, advanced binarization, and morphological smoothing operations. The proposed techniques were tested with video caption images containing both Hangul and English characters from various video sources such as cinema, news and sports. The character recognition results are greatly improved by using the enhanced images. (Sangshin Kwak; Yeongwoo Choi; Kyusik Chung, 2000)

An adaptive image enhancement system that models the human visual system (HVS) can be used for contrast enhancement of X-ray images, which are of poor quality and are usually interpreted visually. The HVS properties considered are its adaptive nature, multichannel mechanism and high nonlinearity. The method is adaptive, nonlinear and multichannel, and combines adaptive filters and homomorphic processing.

The median filtering method is a simple and efficient way to remove impulse noise from digital images. One such method has two stages. The first stage detects the impulse noise: the noise pixels are identified and the pixels are roughly divided into two classes, "noise-free pixels" and "noise pixels". The second stage eliminates the impulse noise from the image: only the noise pixels are processed, while the noise-free pixels are copied directly to the output image. Here, a hybrid of the adaptive median filter and the switching median filter is used. The adaptive median filter framework enables the filter to change its size according to an approximation of the local noise density, while the switching median filter framework speeds up the process and allows local details in the image to be preserved. (Kong, N.S.P.; Theam Foo Ng, 2008)

One of the advantages of the Level-2 improved tolerance-based selective arithmetic mean filtering technique is that it detects and removes noisy pixels and restores the noise-free information. However, the removal of impulse noise is often accomplished at the expense of blurred and distorted edges, so it is necessary to preserve edges and fine details during filtering. (Deivalakshmi, S.; Palanisamy, P., 2010)

An efficient non-linear cascade filter can be used to remove high-density salt and pepper noise from images and video. This method consists of two stages to enhance the filtering. The first stage is the Decision-based Median Filter (DMF), which identifies pixels likely to be contaminated by salt and pepper noise and replaces them by the median value. The second stage is the Unsymmetrical Trimmed Filter, either the Mean Filter (UTMF) or the Midpoint Filter (UTMP), which trims the noisy pixels in an unsymmetrical manner and processes the remaining pixels. The basic idea is that, although the level of denoising in the first stage is smaller at high noise densities, the second stage helps to increase the noise suppression. Hence, this method is suitable for low, medium and high noise densities, even above 90%. The algorithm shows better image and video quality in terms of visual appearance and quantitative measures. (Balasubramanian, S.; Kalishwaran, S.; Muthuraj, R.; Ebenezer, D.; Jayaraj, V., 2009)

The enhancement algorithm enhances detail in CR images, and the enhanced CR image has a good visual effect, so the method is suited for edge detail enhancement of CR medical radiation images. (Zhang, Ming-Hui; Zhang, Yao-Yu, 2010)

Three-dimensional TV is considered the next generation broadcasting service. TOF sensors are a relatively new technology allowing real-time capture of both photometric and geometric scene information. To generate natural 3D video, a practical pipeline is first developed, including TOF data processing and MPEG-4 based data transmission and reception. Colour and depth videos are then acquired from the TOF range sensor, and alpha matting and enhancement are performed to handle fuzzy and hairy objects. (Ji-Ho Cho; Sung-Yeol Kim; Lee, 2010)

## Median Filtering:

Median filtering is a non-linear signal enhancement technique for the smoothing of signals, the suppression of impulse noise, and the preservation of edges. In the one-dimensional case it consists of sliding a window of an odd number of elements along the signal, replacing the centre sample by the median of the samples in the window.
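As an illustration of the one-dimensional case, here is a minimal Python sketch (not part of the original essay); the window length of 3 and the sample signal are chosen purely for demonstration:

```python
# Minimal 1-D median filter sketch: slide an odd-length window along the
# signal and replace each centre sample by the median of the window.
def median_filter_1d(signal, window=3):
    half = window // 2
    out = list(signal)  # border samples are left unchanged
    for i in range(half, len(signal) - half):
        neighbourhood = sorted(signal[i - half:i + half + 1])
        out[i] = neighbourhood[len(neighbourhood) // 2]
    return out

# A single impulse (80) is removed while the ramp is largely preserved.
print(median_filter_1d([2, 3, 80, 4, 5]))  # -> [2, 3, 4, 5, 5]
```

Note how the impulse value 80 never appears in the output: the median is always one of the genuine neighbourhood samples.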

Noise is any undesirable signal. Noise is everywhere and thus we have to learn to live with it. Noise gets introduced into data via any electrical system used for storage, transmission, and/or processing. In addition, nature will always play a "noisy" trick or two with data under observation.

When encountering an image corrupted with noise, you will want to improve its appearance for a specific application. The techniques applied are application-oriented, and different procedures are related to the types of noise introduced into the image. Some important types of noise are: Gaussian or white, Rayleigh, salt-pepper or impulse, periodic, sinusoidal or coherent, uncorrelated, and granular.

In statistics, a median is described as the numeric value separating the higher half of a sample, a population, or a probability distribution, from the lower half. The median of a finite list of numbers can be found by arranging all the numbers from lowest value to highest value and picking the middle one.

For example:

The observations are [7,5,6,8,1,3,8,5,4].

First, we are arranging in ascending order or lowest value to highest value.

[1, 3, 4, 5, 5, 6, 7, 8, 8]

Then the middle one is picked. Here, number of observations n=9, it is an odd number.

The middle value=5.

So, the median =5.

If there is an even number of observations, then there is no single middle value; the median is then usually defined to be the mean of the two middle values.

For example: observations are [7,5,6,8,1,3,8,5,4,6].

First, we are arranging in ascending order or lowest value to highest value.

[1, 3, 4, 5, 5, 6, 6, 7, 8, 8]

Then the middle one is picked. Here, number of observations n=10, it is an even number.

So, averaging the observation 5 and 6 and gets the median value.

The observation values are 5 and 6.

The averaging value of 5 and 6 gives 5.5.

So, the median =5.5.
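These two worked examples can be checked with a few lines of Python (an illustrative sketch, not part of the original essay):

```python
# Median of a finite list: sort, then take the middle value (odd count)
# or the mean of the two middle values (even count).
def median(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([7, 5, 6, 8, 1, 3, 8, 5, 4]))     # odd count  -> 5
print(median([7, 5, 6, 8, 1, 3, 8, 5, 4, 6]))  # even count -> 5.5
```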

Most scanned images contain noise caused by the scanning method (the sensor and its calibration, electrical components, radio frequency spikes); this noise may look like dots of black and white.

The median filter helps by erasing the black dots, called 'pepper', and filling in the white holes in an image, called 'salt' (together known as impulse noise). It resembles the mean filter, but it removes such outliers without significantly affecting the other pixels, whereas the mean filter spreads their influence. Its main advantages are:

- It preserves sharp edges.
- The median value is always an actual value from the neighbourhood, so no new pixel values are invented.

Median filtering is popular for removing salt and pepper noise. It works by replacing the pixel value with the median value in the neighbourhood of that pixel. When applied, we perform brightness ranking by first placing the brightness values of the pixels from each neighbourhood in ascending order. The median or middle value of this ordered sequence is then selected as the representative brightness value for that neighbourhood.

## Median Filter Action:

The median filter is also a sliding-window spatial filter, but it replaces the centre pixel value in the window by the median of all pixel values in the window. As for the mean filter, the kernel is usually square but can be any shape (rectangular, circular, etc.) depending on the image. An example of median filtering of a single 3*3 window of values is shown below.

Unfiltered values:

     6    2    0
     3   97    4
    19    3   10

Arranging the nine pixel values in ascending order gives: 0, 2, 3, 3, 4, 6, 10, 19, 97, so the median value is 4. The centre pixel value 97 is therefore replaced by the median value 4, as shown below.

Median filtered values (only the centre pixel is filtered in this single-window example; the asterisks mark the untouched border pixels):

    *   *   *
    *   4   *
    *   *   *

This illustrates one of the celebrated features of the median filter: its ability to remove 'impulse' noise. The median filter is also widely claimed to be 'edge-preserving' since it theoretically preserves step edges without blurring. However, in the presence of noise it blurs edges in images slightly.
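The single-window example above can be verified in a few lines of Python (an illustrative sketch, not from the original essay):

```python
# Median of the 3*3 window from the example above: the impulse value 97
# is replaced by the median of the nine window values.
window = [6, 2, 0,
          3, 97, 4,
          19, 3, 10]
ordered = sorted(window)             # [0, 2, 3, 3, 4, 6, 10, 19, 97]
median = ordered[len(ordered) // 2]  # middle of nine values
print(median)  # -> 4
```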

## Synthetic Image:

Let us consider a 6*6 image:

    3   1   5   6   9   2
    7   3   8   4   5   1
    2   4   3   3   2   8
    1   8   9   6   2   7
    1   9   2   6   5   8
    3   5   7   9   8   2

Here, we take a 3*3 mask size to find the median value. The first mask position covers the top-left 3*3 neighbourhood:

    3   1   5
    7   3   8
    2   4   3

Ordering the pixel values gives 1, 2, 3, 3, 3, 4, 5, 7, 8, so the median value of this mask is 3. The centre pixel value 3 is therefore replaced by the median value 3 (unchanged in this case).

Here, we find the values A to P: for each interior pixel we find the median of its 3*3 neighbourhood and replace the original centre pixel value with it (border pixels are left unchanged):

    3   1   5   6   9   2
    7   A   B   C   D   1
    2   E   F   G   H   8
    1   I   J   K   L   7
    1   M   N   O   P   8
    3   5   7   9   8   2

Because each result is written back into the image before the next mask position is evaluated, later neighbourhoods use already-filtered values.

To find A:

Order: 1, 2, 3, 3, 3, 4, 5, 7, 8.

Median=3.

To find B:

Order: 1, 3, 3, 3, 4, 4, 5, 6, 8.

Median=4.

To find C:

Order: 2, 3, 3, 4, 4, 5, 6, 8, 9.

Median=4.

To find D:

Order: 1, 2, 2, 3, 4, 5, 6, 8, 9.

Median=4.

In a similar way, we calculate E to P.

To find P:

Order: 2, 4, 5, 5, 5, 7, 8, 8, 9.

Median=5.

The final output of the synthetic image for the 6*6 window:

    3   1   5   6   9   2
    7   3   4   4   4   1
    2   4   4   4   4   8
    1   4   4   4   5   7
    1   4   4   5   5   8
    3   5   7   9   8   2

Checking the synthetic image output using MATLAB (note that the mask size n must be defined before use):

    n = 3;  % mask size (3*3)
    rx = input('Specify the number of Rows:');
    cx = input('Specify the number of Columns:');
    for i = 1:rx
        for j = 1:cx
            temp = input(['Enter data for array ', num2str(i), ', element ', num2str(j), ':'], 's');
            data(i,j) = str2num(temp);
        end
    end
    for i = 1:rx-2          % rx is the number of rows of the data
        for j = 1:cx-2      % cx is the number of columns of the data
            group = [];
            for k = 0:n-1   % mask rows
                for l = 0:n-1   % mask columns
                    indi = i + k;
                    indj = j + l;
                    if (indi > 0 & indi <= rx & indj > 0 & indj <= cx)
                        group = [group, data(indi, indj)];
                    end
                end
            end
            sorted = sort(group);
            data(i+1,j+1) = median(sorted);  % median of the 9 pixel values replaces the centre
        end
    end
    int64(data)

## Output:

3 1 5 6 9 2

7 3 4 4 4 1

2 4 4 4 4 8

1 4 4 4 5 7

1 4 4 5 5 8

3 5 7 9 8 2

Both the hand-calculated synthetic image output and the MATLAB synthetic image output are the same.
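The same pass can be reproduced in Python (an illustrative sketch, not part of the original essay). Note that, as in the MATLAB code, each median is written back into the image before the next window is evaluated, so later neighbourhoods see already-filtered values:

```python
# In-place 3*3 median pass over the 6*6 synthetic image, mirroring the
# MATLAB code above: each computed median replaces the window's centre
# pixel immediately, and border pixels are left unchanged.
def median_pass_inplace(img):
    rows, cols = len(img), len(img[0])
    for i in range(rows - 2):
        for j in range(cols - 2):
            window = sorted(img[r][c]
                            for r in range(i, i + 3)
                            for c in range(j, j + 3))
            img[i + 1][j + 1] = window[4]  # median of 9 values
    return img

image = [[3, 1, 5, 6, 9, 2],
         [7, 3, 8, 4, 5, 1],
         [2, 4, 3, 3, 2, 8],
         [1, 8, 9, 6, 2, 7],
         [1, 9, 2, 6, 5, 8],
         [3, 5, 7, 9, 8, 2]]

for row in median_pass_inplace(image):
    print(row)
```

Running this reproduces the 6*6 output table above row by row.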

## Median Filter Implementation in MATLAB:

In past years, linear filters were the most popular filters in image processing. Their popularity stems from the existence of robust mathematical models that can be used for their analysis and design. However, there are many areas in which nonlinear filters provide significantly better results. The advantage of nonlinear filters lies in their ability to preserve edges and suppress noise without loss of detail. The success of nonlinear filters comes from the fact that image signals, as well as the existing noise types, are usually nonlinear.

Due to the imperfection of image sensors, images are often corrupted by noise. Impulse noise is the most frequently encountered type: in most cases it is caused by malfunctioning pixels in camera sensors, faulty memory locations in hardware, or errors in data transmission. We distinguish two common types of impulse noise: salt-and-pepper noise and random-valued shot noise. For images corrupted by salt-and-pepper noise, the noisy pixels take only the maximum or minimum value; in the case of random-valued shot noise, the noisy pixels have arbitrary values.

Traditionally, impulse noise is removed by a median filter, which is the most popular nonlinear filter. A standard median filter gives poor performance for images corrupted by impulse noise of higher intensity: a simple median filter using a 3*3 or 5*5 pixel window is sufficient only when the noise intensity is less than approximately 10-20%.

Here, we implement the median filter in MATLAB.

## MATLAB Coding:

    function [y] = MedianFiltering(x, w)
    % To find the median of each 3*3 mask of pixel values
    %
    % Synopsis: y = MedianFiltering(x, w)
    %
    % Author: Vaseetharan Sivarajah
    % Body of the function
    n = 3;  % mask size (3*3)
    subplot(121), imshow(x);
    title('Noisy Image');
    [rx, cx] = size(x);
    y = x;
    for i = 1:rx-2          % rx is the number of rows of x
        for j = 1:cx-2      % cx is the number of columns of x
            group = [];
            for k = 0:n-1   % 3*3 mask rows
                for l = 0:n-1   % 3*3 mask columns
                    indi = i + k;
                    indj = j + l;
                    if (indi > 0 & indi <= rx & indj > 0 & indj <= cx)
                        group = [group, x(indi, indj)];
                    end
                end
            end
            sorted = sort(group);
            y(i+1,j+1) = median(sorted);  % median of the 9 pixel values replaces the centre
        end
    end
    subplot(122), imshow(y);

## Output:

The noisy image is corrupted by salt-and-pepper noise. Using the median filter with a 3*3 mask size, most of the noise is eliminated.

If we smooth the noisy image with a larger median filter of 7*7 mask size, all the noisy pixels disappear, as shown in the figure above.

## 3.0 Neighbourhood Averaging Filters

Neighbourhood averaging filters are mean filters: the neighbourhood averaging filter is the simplest low pass filter, in which all coefficients are identical. These filters are sometimes called averaging filters. The characteristics of neighbourhood averaging are defined by the kernel height, width and shape; as the kernel size increases, the smoothing effect also increases. The idea behind these filters is straightforward: by replacing every pixel value in an image with the average of the intensity levels in the neighbourhood defined by the filter mask, the process produces an image with reduced "sharp" transitions in intensity levels. The window is usually square, but can be any shape (rectangular, circular, etc.) depending on the image.

Each point f(x,y) in the smoothed image is obtained from the average pixel value in a neighbourhood of (x,y) in the input image.

For example, if we use a 3x3 neighbourhood around each pixel we would use the mask

    1/9   1/9   1/9
    1/9   1/9   1/9
    1/9   1/9   1/9

Each pixel value is multiplied by 1/9, summed, and then the result placed in the output image. This mask is successively moved across the image until every pixel has been covered. That is, the image is convolved with this smoothing mask (also known as a spatial filter or kernel).
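For instance, applying this 3*3 averaging mask to the same window used in the median filter example gives (a small Python sketch, not from the original essay):

```python
# Each pixel value is multiplied by 1/9 and the results are summed,
# which is equivalent to summing the nine values and dividing by 9.
window = [6, 2, 0,
          3, 97, 4,
          19, 3, 10]
average = sum(window) / 9   # 144 / 9
print(average)  # -> 16.0
```

Compare this with the median value of 4 obtained earlier for the same window: the single outlier 97 pulls the mean up to 16, which illustrates why the median filter is better at suppressing impulse noise.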

However, one usually expects the value of a pixel to be more closely related to the values of pixels close to it than to those further away. This is because most points in an image are spatially coherent with their neighbours; indeed it is generally only at edge or feature points where this hypothesis is not valid. Accordingly it is usual to weight the pixels near the centre of the mask more strongly than those at the edge.

Some common weighting functions include the rectangular weighting function above (which just takes the average over the window), a triangular weighting function, or a Gaussian.

In practice one doesn't notice much difference between different weighting functions, although Gaussian smoothing is the most commonly used. Gaussian smoothing has the attribute that the frequency components of the image are modified in a smooth manner.

Smoothing reduces or attenuates the higher frequencies in the image. Mask shapes other than the Gaussian can do odd things to the frequency spectrum, but as far as the appearance of the image is concerned we usually don't notice much.

The arithmetic mean is the "standard" average, often simply called the "mean".

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

The mean may be confused with the median, mode or range. The mean is the average of a set of values, or distribution; however, for probability distributions, the mean is not necessarily the same as the median, or the mode.

For example:

The observations are [7,5,6,8,1,3,8,5,4].

First, we find the total of these observations:

Total = 7+5+6+8+1+3+8+5+4 = 47

Then we find the average. Here, the number of observations is n = 9:

Average = Total/9 = 47/9 = 5.22

So, rounding to the nearest integer, the average = 5.
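The same calculation in Python (an illustrative sketch):

```python
# Arithmetic mean: sum the observations and divide by their count.
observations = [7, 5, 6, 8, 1, 3, 8, 5, 4]
mean = sum(observations) / len(observations)
print(round(mean, 2))  # -> 5.22
print(round(mean))     # -> 5
```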

## 3.1 Synthetic image

Let us consider the same 6*6 image:

    3   1   5   6   9   2
    7   3   8   4   5   1
    2   4   3   3   2   8
    1   8   9   6   2   7
    1   9   2   6   5   8
    3   5   7   9   8   2

Here, we take a 3*3 mask size to find the neighbourhood averaging value. The first mask position covers the top-left 3*3 neighbourhood:

    3   1   5
    7   3   8
    2   4   3

The pixel values in this mask are 1, 2, 3, 3, 3, 4, 5, 7, 8, which sum to 36, so the averaging value of this mask is 36/9 = 4. Here, the centre pixel value 3 is replaced by the averaging value 4.

Using this method, we calculate the averaging value over the whole 6*6 window.

    3   1   5   6   9   2
    7   A   B   C   D   1
    2   E   F   G   H   8
    1   I   J   K   L   7
    1   M   N   O   P   8
    3   5   7   9   8   2

Here, we find the values A to P: for each interior pixel we find the averaging value of its 3*3 neighbourhood and replace the original centre pixel value with it.

To find A:

Order:1,2,3,3,3,4,5,7,8.

Averaging=(1+2+3+3+3+4+5+7+8)/9=4.

To find B:

Averaging=(1+3+3+3+4+4+5+6+8)/9.

Averaging=5.

To find C:

Averaging=(2+3+3+4+4+5+6+8+9)/9.

Averaging=5.

To find D:

Averaging=(1+2+2+3+4+5+6+8+9)/9.

Averaging=5.

In a similar way, we calculate E to P.

To find P:

Averaging=(2+4+5+5+5+7+8+8+9)/9

Averaging=6.

The final output of the synthetic image for the 6*6 window:

    3   1   5   6   9   2
    7   4   5   5   5   1
    2   5   5   5   5   8
    1   5   6   5   6   7
    1   4   5   6   6   8
    3   5   7   9   8   2

Checking the synthetic image output using MATLAB:

    function [y] = NeighbourhoodAveraging(x, w)
    % To find the neighbourhood average of each mask of pixel values
    %
    % Synopsis: y = n_av(x, w)
    %
    % Author: Vaseetharan Sivarajah
    % Body of the function
    n = 3;  % mask size (3*3)
    rx = input('Specify the number of Rows:');
    cx = input('Specify the number of Columns:');
    for i = 1:rx
        for j = 1:cx
            temp = input(['Enter data for array ', num2str(i), ', element ', num2str(j), ':'], 's');
            data(i,j) = str2num(temp);
        end
    end
    for i = 1:rx-2          % rx is the number of rows of the data
        for j = 1:cx-2      % cx is the number of columns of the data
            group = [];
            for k = 0:n-1   % mask rows
                for l = 0:n-1   % mask columns
                    indi = i + k;
                    indj = j + l;
                    if (indi > 0 & indi <= rx & indj > 0 & indj <= cx)
                        group = [group, data(indi, indj)];
                    end
                end
            end
            sorted = sort(group);
            data(i+1,j+1) = mean(sorted);  % mean of the 9 pixel values replaces the centre
        end
    end
    int64(data)

## Output:

3 1 5 6 9 2

7 4 5 5 5 1

2 5 5 5 5 8

1 5 6 5 6 7

1 4 5 6 6 8

3 5 7 9 8 2

Both the hand-calculated synthetic image output and the MATLAB synthetic image output are the same.

## 3.3 Neighbourhood Averaging Filter Implementation in MATLAB

    function [y] = Neighborhood_Averaging(x, w)
    % To find the neighbourhood average of each mask of pixel values
    %
    % Synopsis: y = Neighborhood_avg(x, w)
    %
    % Author: Vaseetharan Sivarajah
    % Body of the function
    n = 3;  % mask size (3*3)
    subplot(121), imshow(x);
    title('Noisy Image');
    [rx, cx] = size(x);
    y = x;
    for i = 1:rx-2          % rx is the number of rows of x
        for j = 1:cx-2      % cx is the number of columns of x
            group = [];
            for k = 0:n-1   % 3*3 mask rows
                for l = 0:n-1   % 3*3 mask columns
                    indi = i + k;
                    indj = j + l;
                    if (indi > 0 & indi <= rx & indj > 0 & indj <= cx)
                        group = [group, x(indi, indj)];
                    end
                end
            end
            sorted = sort(group);
            y(i+1,j+1) = mean(sorted);  % mean of the 9 pixel values replaces the centre
        end
    end
    subplot(122), imshow(y);

## Output:

The noisy image is corrupted by salt-and-pepper noise. Using the neighbourhood averaging filter with a 3*3 mask size, most of the noise is eliminated.

If we smooth the noisy image with a larger neighbourhood averaging filter of 7*7 mask size, the noisy pixels disappear, as shown in the figure above.

## Histogram Equalization

In histogram equalization, the goal is to obtain a uniform histogram for the output image; in other words, to distribute the gray levels within an image so that every gray level is equally likely to occur. Histogram equalization increases the brightness and contrast of dark, low-contrast images, making features observable that were not visible before. It is also used to standardize the brightness and contrast of images. The process of histogram equalization is a mapping function that maps the input histogram to a uniformly distributed output histogram.

Consider for a moment continuous intensity values, and let the variable r denote the intensities of an image to be processed. As usual, r is in the range [0, L-1], with r = 0 representing black and r = L-1 representing white. For r satisfying these conditions, we focus on transformations of the form

s = T(r), where 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1.

Here s is the output intensity level and r is the intensity of a pixel in the input image.

T(r) must satisfy two conditions:

(a) T(r) is a monotonically (in fact, strictly monotonically) increasing function in the interval 0 ≤ r ≤ L-1, so that output intensity values are never ordered inconsistently with the corresponding input values, and the mapping from s back to r is one-to-one, preventing ambiguities.

(b) 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1, so that the range of output intensities is the same as that of the input.

## Histogram Equalization Action:

The probability of occurrence of intensity level rk in a digital image is approximated by

pr(rk) = nk/MN, where k = 0, 1, 2, ..., L-1

Where MN is the total number of pixels in the image, nk is the number of pixels that have intensity rk, and L is the number of possible intensity levels in the image. A plot of pr(rk) versus rk is commonly referred to as a histogram.

The discrete form of the transformation is given by

$$s_k = T(r_k) = (L-1)\sum_{j=0}^{k} p_r(r_j), \qquad k = 0, 1, 2, \ldots, L-1$$

Thus, the output image is obtained by mapping each pixel with intensity rk in the input image into the corresponding pixel with level sk in the output image. The transformation T(rk) in this equation is called histogram equalization.

Let us consider a 3-bit image (L = 8) of size 64*64 pixels (MN = 4096) with the intensity distribution shown in the table below, where the intensity levels are integers in the range [0, L-1] = [0, 7].

| rk | nk | pr(rk) = nk/MN |
|----|------|------|
| r0 = 0 | 790 | 0.19 |
| r1 = 1 | 1023 | 0.25 |
| r2 = 2 | 850 | 0.21 |
| r3 = 3 | 656 | 0.16 |
| r4 = 4 | 329 | 0.08 |
| r5 = 5 | 245 | 0.06 |
| r6 = 6 | 122 | 0.03 |
| r7 = 7 | 81 | 0.02 |

Using the pr(rk) values, we calculate the s values.

To find s0:

s0 = 7pr(r0) = 7*0.19 = 1.33

To find s1:

s1=7pr (r0) +7pr (r1)

=7*0.19+7*0.25

= 3.08

To find s2:

s2=7pr (r0) +7pr (r1) +7pr (r2)

=7*0.19+7*0.25+7*0.21

=4.55

To find s3:

s3=7pr (r0) +7pr (r1) +7pr (r2) +7pr (r3)

=7*(0.19+0.25+0.21+0.16)

=5.67

To find s4:

s4=7pr (r0) +7pr (r1) +7pr (r2) +7pr (r3) +7pr (r4)

=7*(0.19+0.25+0.21+0.16+0.08)

=6.23

Similarly, s5=6.65, s6=6.86 and s7=7

At this point, the s values still have fractions because they were generated by summing probability values, so we round them to the nearest integer.

s0=1.33->1 s4=6.23->6

s1=3.08->3 s5=6.65->7

s2=4.55->5 s6=6.86->7

s3=5.67->6 s7=7.00->7

These are values of the equalized histogram. Observe that there are only five distinct levels.
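This worked example can be reproduced in Python (an illustrative sketch; rounding is half-up, to the nearest integer, as in the text):

```python
# Histogram equalization mapping s_k = (L-1) * sum_{j<=k} p_r(r_j)
# for the 3-bit (L = 8) example, followed by rounding to the nearest integer.
L = 8
p = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]

s, cumulative = [], 0.0
for pk in p:
    cumulative += pk
    s.append((L - 1) * cumulative)

rounded = [int(v + 0.5) for v in s]   # round half up
print([round(v, 2) for v in s])  # -> [1.33, 3.08, 4.55, 5.67, 6.23, 6.65, 6.86, 7.0]
print(rounded)                   # -> [1, 3, 5, 6, 6, 7, 7, 7]
```

The rounded list shows the five distinct levels (1, 3, 5, 6, 7) noted above.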

## Synthetic Image:

Let us consider the same 6*6 image:

    3   1   5   6   9   2
    7   3   8   4   5   1
    2   4   3   3   2   8
    1   8   9   6   2   7
    1   9   2   6   5   8
    3   5   7   9   8   2

First, we calculate rk, nk and pr(rk). The intensity levels are integers in the range [0, L-1] = [0, 9].

| rk | nk | pr(rk)=nk/MN |
| --- | --- | --- |
| r0=0 | 0 | 0.00 |
| r1=1 | 4 | 0.11 |
| r2=2 | 6 | 0.17 |
| r3=3 | 5 | 0.14 |
| r4=4 | 2 | 0.06 |
| r5=5 | 4 | 0.11 |
| r6=6 | 3 | 0.08 |
| r7=7 | 3 | 0.08 |
| r8=8 | 5 | 0.14 |
| r9=9 | 4 | 0.11 |

Here, MN=6*6=36.

By using these values, we can calculate s values.

To find s0:

s0=9*0.0=0

To find s1:

s1=9pr (r0) +9pr (r1)

=9*0.0+9*0.11

= 0.99

To find s2:

s2=9pr (r0) +9pr (r1) +9pr (r2)

=9*0.0+9*0.11+9*0.17

=2.52

To find s3:

s3=9pr (r0) +9pr (r1) +9pr (r2) +9pr (r3)

=9*(0.00+0.11+0.17+0.14)

=3.78

To find s4:

s4=9pr (r0) +9pr (r1) +9pr (r2) +9pr (r3) +9pr (r4)

=9*(0.00+0.11+0.17+0.14+0.06)

=4.32

To find s5:

s5=s4+9pr (r5) =4.32+9*0.11=5.31

s6=s5+9pr (r6) =5.31+9*0.08=6.03

s7=s6+9pr (r7) =6.03+9*0.08=6.75

s8=s7+9pr (r8) =6.75+9*0.14=8.01

s9=s8+9pr (r9) =8.01+9*0.11=9

At this point, the s values still have fractions because they were generated by summing probability values, so we round them to the nearest integer.

s0=0.0->0 s5=5.31->5

s1=0.99->1 s6=6.03->6

s2=2.52->3 s7=6.75->7

s3=3.78->4 s8=8.01->8

s4=4.32->4 s9=9.00->9
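The same calculation for the synthetic image can be sketched in Python as a cross-check. Exact fractions are used, with conventional round-half-up so the result matches the hand calculation above (which rounded probabilities to two decimals).

```python
import math

# 6*6 synthetic image, intensity levels 0..9 (L = 10)
img = [
    [3, 1, 5, 6, 9, 2],
    [7, 3, 8, 4, 5, 1],
    [2, 4, 3, 3, 2, 8],
    [1, 8, 9, 6, 2, 7],
    [1, 9, 2, 6, 5, 8],
    [3, 5, 7, 9, 8, 2],
]
L = 10
MN = 36

# Histogram counts nk for each level 0..9
nk = [sum(row.count(k) for row in img) for k in range(L)]

# sk = (L-1) * cumulative distribution, rounded half up (as in the text)
s, cum = [], 0
for n in nk:
    cum += n
    s.append(math.floor((L - 1) * cum / MN + 0.5))

print(nk)  # [0, 4, 6, 5, 2, 4, 3, 3, 5, 4]
print(s)   # [0, 1, 3, 4, 4, 5, 6, 7, 8, 9]
```

Note that the exact value of s2 is 2.5 (the hand calculation obtained 2.52 from rounded probabilities); rounding half up gives 3 in both cases.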

By checking the synthetic image output by using Matlab

rx = input('Specify the number Rows:');

cx = input('Specify the number of Columns:');

for i = 1:rx

for j = 1:cx

temp = input(['Enter data for array ', int2str(i), ', element ', int2str(j), ':'], 's');

data(i,j) = str2num(temp);

end

end

in=rx*cx;

k=1:9; % No of gray level values

g(k)=zeros(1,9); % Sets all gray values to zero initially

for k=1:9;

for i=1:rx;

for j=1:cx;

if data(i,j)==k % Filteration of gray levels of each element in the matrix

g(k)=g(k)+1;

end

end

end

end

k=1:9;

figure(1),subplot(121),stem(k,g,'.'), axis tight

title('Histogram using manual Histogram function');

xlabel('Intensity of the Gray Level');

ylabel('No of Pixels');

for k=1:9

p(k)=g(k)/in;

end

s(1)=9*p(1);

for l=1:8

s(l+1)=9*p(l+1)+s(l);

end

int64(s);

for l=1:8

if(s(l)==s(l+1))

g(l)=g(l)+g(l+1)

end

end

figure(1),subplot(122),stem(int64(s),g);

title('Histogram Equalization using manual function');

xlabel('Intensity of the Gray Level');

ylabel('No of Pixels');

## Coding:

```matlab
% v is the input grayscale image (e.g. loaded with imread)
figure(1), imshow(v);
[rx,cx] = size(v);
in = rx*cx;
figure(1), subplot(121), imhist(v), axis tight

k = 1:256;             % number of gray-level values
g(k) = zeros(1,256);   % set all gray-level counts to zero initially
for k = 1:256
    for i = 1:rx
        for j = 1:cx
            if v(i,j) == k   % count occurrences of each gray level
                g(k) = g(k) + 1;
            end
        end
    end
end

k = 1:256;
figure(1), subplot(122), stem(k,g);
title('Histogram using manual function');
xlabel('Intensity of the Gray Level');
ylabel('No of Pixels');

for k = 1:256
    p(k) = g(k)/in;
end
s(1) = 255*p(1);
for l = 1:255
    s(l+1) = 255*p(l+1) + s(l);   % cumulative mapping s = (L-1)*cumsum(p)
end
int64(s);
figure(2), subplot(121), stem(int64(s), g);
title('Histogram Equalization using manual function');
xlabel('Intensity of the Gray Level');
ylabel('No of Pixels');

for i = 1:rx
    for j = 1:cx
        for k = 1:256
            if v(i,j) == k
                v(i,j) = s(k);    % remap each pixel to its equalized level
            end
        end
    end
end
figure(4);
imshow(v);
```

## 5.0 Edge detection

Edge detection is a problem of fundamental importance in image analysis. In typical images, edges characterize object boundaries and are therefore useful for segmentation, registration, and identification of objects in a scene. In this section, the construction, characteristics, and performance of a number of gradient and zero-crossing edge operators will be presented.

An edge is a jump in intensity; the cross section of an edge has the shape of a ramp, and an ideal edge is a discontinuity (i.e., a ramp with an infinite slope). The first derivative assumes a local maximum at an edge. For a continuous image f(x,y), where x and y are the row and column coordinates respectively, we typically consider the two directional derivatives ∂f/∂x and ∂f/∂y. Of particular interest in edge detection are two functions that can be expressed in terms of these directional derivatives: the gradient magnitude and the gradient orientation. The gradient magnitude is defined as

|∇f| = sqrt((∂f/∂x)^2 + (∂f/∂y)^2)

and the gradient orientation is given by

θ = arctan((∂f/∂y)/(∂f/∂x)).

When the first derivative achieves a maximum, the second derivative is zero.
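As a small illustration (not part of the original report, which uses MATLAB), the gradient magnitude and orientation can be approximated with simple forward differences. The tiny Python sketch below evaluates them at a pixel next to a vertical step edge:

```python
import math

# Tiny grayscale image with a vertical step edge between columns 2 and 3
img = [
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
]

def gradient(img, x, y):
    """Forward-difference approximation of (df/dx, df/dy) at row x, column y."""
    gx = img[x + 1][y] - img[x][y]  # derivative along the row direction
    gy = img[x][y + 1] - img[x][y]  # derivative along the column direction
    return gx, gy

gx, gy = gradient(img, 1, 2)        # pixel just left of the edge
mag = math.sqrt(gx ** 2 + gy ** 2)  # gradient magnitude
theta = math.atan2(gy, gx)          # gradient orientation

print(gx, gy, mag)  # 0 190 190.0
```

The magnitude peaks exactly at the intensity jump, and the orientation points across the edge, which is what gradient-based edge operators exploit.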

Edge Detection Action:

An example of edge detection on a single 3*3 window of values is shown below.

Values before edge detection:

6 2 0
3 97 4
19 3 10

Let us consider f(x,y)=97.

Then we calculate the Horizontal Edge

H(x,y)=f(x,y)-f(x+1,y)

=97-3

=94

Calculate Vertical Edge, V(x,y)

V(x,y)=f(x,y)-f(x,y+1)

=97-4

=93

Calculate Positive Diagonal Edge:

M(x,y)=f(x,y)-f(x+1,y+1)

=97-10

=87

Calculate Negative Diagonal Edge

N(x,y)=f(x,y)-f(x+1,y-1)

=97-19

=78

Here, the threshold value is set to 40. If H(x,y) ≥ 40 || V(x,y) ≥ 40 || M(x,y) ≥ 40 || N(x,y) ≥ 40, then f(x,y) = 0; otherwise, f(x,y) = 97.

For this example, f(x,y) = 0, so the centre pixel is detected as an edge.

This illustrates the key feature of edge detection: its ability to locate the edges of an image.
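This single-window calculation can be checked with a minimal Python sketch (illustrative only; the report's own implementation is in MATLAB):

```python
# 3*3 window from the example; the centre pixel f(x,y) = 97
w = [[6, 2, 0],
     [3, 97, 4],
     [19, 3, 10]]

f = w[1][1]
H = f - w[2][1]  # horizontal edge:      f(x,y) - f(x+1,y)   = 97 - 3
V = f - w[1][2]  # vertical edge:        f(x,y) - f(x,y+1)   = 97 - 4
M = f - w[2][2]  # positive diagonal:    f(x,y) - f(x+1,y+1) = 97 - 10
N = f - w[2][0]  # negative diagonal:    f(x,y) - f(x+1,y-1) = 97 - 19

T = 40  # threshold
out = 0 if (H >= T or V >= T or M >= T or N >= T) else f
print(H, V, M, N, out)  # 94 93 87 78 0
```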

## 5.1 Synthetic Image

Let us consider the same 6*6 window:

3 1 5 6 9 2
7 3 8 4 5 1
2 4 3 3 2 8
1 8 9 6 2 7
1 9 2 6 5 8
3 5 7 9 8 2

For this synthetic image, we assume a threshold value of 4.

Let us consider f(x,y) = 3 (the pixel in row 2, column 2); then we calculate H(x,y), V(x,y), M(x,y) and N(x,y):

Horizontal edge: H(x,y) = f(x,y) - f(x+1,y) = 3 - 4 = -1

Vertical edge: V(x,y) = f(x,y) - f(x,y+1) = 3 - 8 = -5

Positive diagonal edge: M(x,y) = f(x,y) - f(x+1,y+1) = 3 - 3 = 0

Negative diagonal edge: N(x,y) = f(x,y) - f(x+1,y-1) = 3 - 2 = 1

If H(x,y) ≥ threshold || V(x,y) ≥ threshold || M(x,y) ≥ threshold || N(x,y) ≥ threshold, then f(x,y) = 0; otherwise, f(x,y) = 9. Here none of the differences reaches the threshold, so f(x,y) = 9.

Similarly, we calculate the rest of f(x,y).

3 1 5 6 9 2
7 9 0 9 0 1
2 9 9 9 9 8
1 0 0 0 9 7
1 0 9 0 0 8
3 5 7 9 8 2

Checking the output with MATLAB:

```matlab
function y = Edge_detection(x,w)
% To find the edge detection
%
% Synopsis: y = Edge_detection(x,w)
%
% Author: Vaseetharan Sivarajah

% Body of the function
rx = input('Specify the number of rows: ');
cx = input('Specify the number of columns: ');
for i = 1:rx
    for j = 1:cx
        temp = input(['Enter data for array ', num2str(i), ', element ', num2str(j), ': '], 's');
        v(i,j) = str2num(temp);
    end
end

Threshold = 4;
for i = 2:5
    for j = 2:5
        M(i,j) = v(i,j) - v(i,j+1);     % difference to the right
        N(i,j) = v(i,j) - v(i+1,j);     % difference below
        H(i,j) = v(i,j) - v(i+1,j+1);   % difference to the lower-right
        V(i,j) = v(i,j) - v(i-1,j-1);   % difference to the upper-left
        if (M(i,j) >= Threshold || N(i,j) >= Threshold || H(i,j) >= Threshold || V(i,j) >= Threshold)
            v(i,j) = 0;
        else
            v(i,j) = 9;
        end
    end
end
```

## Output

3 1 5 6 9 2

7 9 0 9 0 1

2 9 9 9 9 8

1 0 0 0 9 7

1 0 9 0 0 8

3 5 7 9 8 2
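A direct Python port of the MATLAB loop above reproduces this output (a sketch, assuming the same in-place update on v, which means later pixels see already-modified upper-left neighbours through V):

```python
# 6*6 synthetic image (0-based indices; the MATLAB loop i=2:5, j=2:5
# corresponds to i=1..4, j=1..4 here)
v = [
    [3, 1, 5, 6, 9, 2],
    [7, 3, 8, 4, 5, 1],
    [2, 4, 3, 3, 2, 8],
    [1, 8, 9, 6, 2, 7],
    [1, 9, 2, 6, 5, 8],
    [3, 5, 7, 9, 8, 2],
]
T = 4  # threshold

for i in range(1, 5):
    for j in range(1, 5):
        M = v[i][j] - v[i][j + 1]      # difference to the right
        N = v[i][j] - v[i + 1][j]      # difference below
        H = v[i][j] - v[i + 1][j + 1]  # difference to the lower-right
        V = v[i][j] - v[i - 1][j - 1]  # difference to the upper-left
        # In-place update, exactly as in the MATLAB code
        v[i][j] = 0 if (M >= T or N >= T or H >= T or V >= T) else 9

for row in v:
    print(row)
```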


## 5.3 Edge Detection Implementation in MATLAB

```matlab
function y = Edge_detection(x,w)
% To find the edge detection
%
% Synopsis: y = Edge_detection(x,w)
%
% Author: Vaseetharan Sivarajah

% Body of the function
% v is the input grayscale image (e.g. loaded with imread)
figure(1), subplot(121), imshow(v);
[rx,cx] = size(v);
x = v;
Threshold = 40;
for i = 2:rx-1               % interior pixels only, for any image size
    for j = 2:cx-1
        M(i,j) = v(i,j) - v(i,j+1);     % difference to the right
        N(i,j) = v(i,j) - v(i+1,j);     % difference below
        H(i,j) = v(i,j) - v(i+1,j+1);   % difference to the lower-right
        V(i,j) = v(i,j) - v(i-1,j-1);   % difference to the upper-left
        if M(i,j) >= Threshold || N(i,j) >= Threshold || H(i,j) >= Threshold || V(i,j) >= Threshold
            x(i,j) = 0;
        else
            x(i,j) = 255;
        end
    end
end
figure(1), subplot(122), imshow(x);
```

## 6.0 Conclusion:

This report investigates a new hardware structure for a content-based median filter capable of performing impulse-noise removal on gray-scale images. The noise detection procedure takes into account the differences between the central pixel and the surrounding pixels of a neighbourhood. The investigation shows that the method can remove up to 95% of the noise from a highly corrupted image. The impulse noise (salt-and-pepper noise) is removed using the median filtering technique, and the embedded C code is implemented in order to achieve a clear image. The details inside the image are preserved and the RMSE value is small.

At present, median filtering is used in digital cameras to overcome noise produced during data transmission and noise caused by malfunctioning pixels in the camera sensor. In future, more responsive median filters will provide stronger noise reduction, and enhancement methods will be applied to implementing 3D television.