Research Proposal: X-ray Image Enhancement



INTRODUCTION

1.1 Digital image

A digital image is essentially a two-dimensional array of light-intensity levels, which can be denoted by f(x,y), where the value or amplitude of f at spatial coordinates (x,y) gives the intensity of the image at that point. The intensity is a measure of the relative “brightness” of each point. For a monochrome (single-color) digital image, the brightness level is represented by a series of discrete intensity shades from darkest to brightest. These discrete intensity shades are usually referred to as the “gray levels”, with black representing the darkest level and white the brightest level. These levels are encoded in terms of binary bits in the digital domain, and the most commonly used encoding scheme is the 8-bit display with 256 levels of brightness or intensity, from level 0 (black) to 255 (white). The digital image can therefore be conveniently represented and manipulated as an N (number of rows) x M (number of columns) matrix, with each element containing a value between 0 and 255 (for an 8-bit monochrome image), i.e.

f(x,y) = [ f(0,0)      f(0,1)      ...  f(0,M-1)
           f(1,0)      f(1,1)      ...  f(1,M-1)
           ...         ...         ...  ...
           f(N-1,0)    f(N-1,1)    ...  f(N-1,M-1) ],   where 0 ≤ f(x,y) ≤ 255.

Different colors are created by mixing different proportions of the 3 primary colors: red, green and blue, i.e. RGB for short. Hence, a color image is represented by an N x M x 3 three-dimensional matrix, with each layer representing the gray-level distribution of one primary color in the image.
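To make the matrix view concrete, the following minimal sketch (Python with NumPy, used here purely for illustration and not part of the proposed method) builds an 8-bit grayscale image as an N x M array and a color image as an N x M x 3 array:

import numpy as np

N, M = 4, 5                      # number of rows and columns

# 8-bit monochrome image: one gray level (0-255) per pixel
gray = np.zeros((N, M), dtype=np.uint8)
gray[1, 2] = 255                 # f(1,2) set to white

# Color image: one N x M plane per primary color (R, G, B)
color = np.zeros((N, M, 3), dtype=np.uint8)
color[..., 0] = 200              # red plane
color[..., 1] = 30               # green plane
color[..., 2] = 30               # blue plane -> a reddish image

print(gray.shape, color.shape)   # (4, 5) and (4, 5, 3)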

Each point in the image denoted by the (x,y) coordinates is referred to as a pixel. The pixel is the smallest cell of information in the image. It contains a value of the intensity level corresponding to the detected irradiance. Therefore, the pixel size defines the resolution and acuity of the image seen. Each individual detector in the sensor array and each dot on the LCD (liquid crystal display) screen contributes to generate one pixel of the image. There is actually a physical separation distance between pixels due to finite manufacturing tolerance. However, these separations are not detectable, as the human eye is unable to resolve such small details at normal viewing distance (refer to Rayleigh’s criterion for resolution of diffraction-limited images [1]).

For simplicity, digital images are represented by an array of square pixels. The relation between pixels constitutes the information contained in an image. A pixel at coordinates (x,y) has eight immediate neighbors: four horizontal and vertical neighbors at unit distance, and four diagonal neighbors at distance √2:

(x-1, y-1)   (x-1, y)   (x-1, y+1)
(x, y-1)     (x, y)     (x, y+1)
(x+1, y-1)   (x+1, y)   (x+1, y+1)

Figure 1: Neighbors of a pixel (x,y). Note the direction of the x and y coordinates used.

Pixels can be connected to form boundaries of objects or components of regions in an image when the gray levels of adjacent pixels satisfy a specified criterion of similarity (equal or within a small difference). The difference in the gray levels of two adjacent pixels gives the contrast needed to differentiate between regions or objects. This difference has to be of a certain magnitude in order for the human eye to identify it as a boundary.

1.2 Image processing

Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it [2].

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more) digital image processing may be modeled in the form of Multidimensional Systems.

Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.

Digital image processing consists of several steps. The first step is image acquisition, i.e. acquiring a digital image. Once a digital image has been obtained, the next step is preprocessing. The key function of the preprocessing stage is to improve the image in ways that increase the chances of success of the subsequent processes, produce better image quality and reduce noise. The next stage deals with image segmentation, which partitions the input image into its constituent parts or objects.

The next step is representation and description. Representation transforms the raw data into a descriptive form suitable for computer processing, and description deals with extracting features from those results; such descriptions are necessarily task specific. The last step is recognition, the process that assigns a label to an object based on the information about the object. Interpretation then assigns meaning to the recognized objects.

1.3 Image preprocessing

Image pre-processing is the term for operations on images at the lowest level of abstraction. These operations do not increase image information content; in fact they decrease it if entropy is used as the information measure [3], [4]. The aim of pre-processing is an improvement of the image data that suppresses undesired distortions or enhances image features relevant for the further processing and analysis task. Image pre-processing exploits the redundancy in images: neighboring pixels corresponding to one real object have the same or similar brightness values, so if a distorted pixel can be picked out from the image, it can be restored as an average of the neighboring pixels. Image pre-processing methods can be classified into categories according to the size of the pixel neighborhood that is used for the calculation of a new pixel brightness.

Image enhancement is necessary to improve the visual appearance of the image or to provide a better transform representation for subsequent automated image processing such as image analysis, detection, segmentation and recognition [5], [6]. To discern the concealed but important information in the images, it is deemed necessary to use various image enhancement methods such as enhancing edges, emphasizing differences, or reducing noise.

In this thesis, one of these enhancement methods will be applied to x-ray images to increase both the accuracy and the interpretability of the data.

Digital images are now ubiquitous. Digital cameras, which are the main source of digital images, are widely available at low cost. Sometimes the image taken by a digital camera is of poor quality and requires some enhancement. Many techniques exist that can enhance a digital image without spoiling it.

The enhancement methods can broadly be divided in to the following two categories:

1. Spatial Domain Methods

2. Frequency Domain Methods

In spatial domain techniques, we deal directly with the image pixels: the pixel values are manipulated to achieve the desired enhancement. In frequency domain methods, the image is first transformed into the frequency domain, i.e. the Fourier transform of the image is computed first. All the enhancement operations are performed on the Fourier transform of the image, and then the inverse Fourier transform is applied to obtain the resultant image. These enhancement operations are performed in order to modify the image brightness, contrast or the distribution of the grey levels. As a consequence, the pixel values (intensities) of the output image are modified according to the transformation function applied to the input values.
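As a simple illustration of the frequency-domain route (a minimal Python/NumPy sketch showing only the transform, filter, inverse-transform pattern; the Gaussian high-boost mask and its parameters are arbitrary choices for the example, not part of the proposed method):

import numpy as np

def frequency_domain_sharpen(image, boost=1.5, sigma=20.0):
    """Enhance an image by scaling its high-frequency content in the Fourier domain."""
    rows, cols = image.shape
    # Forward transform, with the zero frequency shifted to the center
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # Simple high-boost mask: 1 at low frequencies, rising to 'boost' at high frequencies
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    distance2 = u[:, None] ** 2 + v[None, :] ** 2
    lowpass = np.exp(-distance2 / (2 * sigma ** 2))
    mask = 1.0 + (boost - 1.0) * (1.0 - lowpass)

    # Apply the mask and transform back to the spatial domain
    enhanced = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.clip(np.real(enhanced), 0, 255)

# Example on a synthetic 8-bit image
img = (np.random.rand(64, 64) * 255).astype(np.uint8)
out = frequency_domain_sharpen(img.astype(float))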

Image enhancement is applied in many fields where images need to be understood and analyzed, for example the analysis of satellite images and medical image analysis.

BACKGROUND

The aim of image enhancement is to improve the interpretability of information in images for human viewers, or to provide better input for other automated image processing techniques. Image enhancement (IE) has contributed to research advancement in various fields. Some of the areas in which IE has wide application are mentioned below.

1. Medical imaging [7], [8], [9], uses IE techniques for reducing noise and sharpening details to improve the visual representation of the image. Since minute details play a critical role in diagnosis and treatment of disease, it is essential to highlight important features while displaying medical images. This makes IE a necessary tool for viewing anatomic areas in MRI, ultrasound and x-rays to name a few.

2. In forensics [10], [11], IE is used for identification, evidence gathering and surveillance. Images obtained from fingerprint detection, security videos analysis and crime scene investigations are enhanced to help in identification of culprits and protection of victims.

3. In atmospheric sciences [12], [13], IE is used to reduce the effects of haze, fog, mist and turbulent weather for meteorological observations. It helps in detecting shape and structure of remote objects in environment sensing [14].

4. Astrophotography faces difficulties due to light and noise pollution that can be minimized by IE [15]. Several cameras have built-in IE functions for real-time sharpening and contrast enhancement. Moreover, numerous software packages [16], [17] allow editing such images to provide better and more vivid results.

5. IE techniques have been used in oceanography, where the study of images reveals interesting features of water flow, sediment concentration, geomorphology and bathymetric patterns, to name a few. These features are more clearly observable in images that are enhanced to overcome the problems of moving targets, deficiency of light and obscure surroundings.

6. IE techniques when applied on pictures and videos help the visually impaired in reading small print, using computers, television and face recognition [18]. Several studies have been conducted [19], [20], that highlight the need and value of using IE for the visually impaired.

7. Virtual restoration of historic paintings and artifacts [21] often employs the techniques of IE in order to reduce stains and crevices. Color contrast enhancement, sharpening and brightening are just some of the techniques used to make the images vivid. IE is a powerful tool for restorers who can make informed decisions by viewing the results of restoring a painting beforehand. It is equally useful in discerning text from worn-out historic documents [22].

8. In the e-learning field, IE is used to clarify the contents of a chalkboard as viewed on streamed video; it helps students focus on the text and improves content readability [23]. Similarly, collaboration [24] through the whiteboard is facilitated by enhancing the shared data and diminishing artifacts like shadows and blemishes.

9. Numerous other fields including meteorology, microbiology, biomedicine, bacteriology, climatology and law enforcement benefit from various IE techniques. These benefits are not limited to professional studies and businesses but extend to common users who employ IE to cosmetically enhance and correct their images.

Inspired by the use of image enhancement in a multitude of fields, this research aims at using these techniques on x-ray images, where the raw data obtained directly from the X-ray acquisition device may yield a relatively poor quality image representation.

RESEARCH PROBLEM

The x-ray image enhancement problems can be classified into three main problems:

(1) X-ray images (especially thorax images) include different regions containing details. Both sharp and soft transitions between the regions and details may exist in all visual spans. When all details are enhanced to the same extent, the relatively significant details cover most of the visual span and prevent the visibility of relatively less significant details.

(2) Since X-ray images are used for diagnostic purposes, the image enhancement must not introduce misleading information; making a structure look more or less significant than it really is must be avoided.

(3) Data loss is not desirable in diagnostic images. Therefore, the noise attenuation procedure must not remove any visual information.

Another problem with X-ray (especially thorax) images is the risk involved in incorporating a priori information about the visual structures of the image for enhancement and denoising purposes. Unlike common images, X-ray images are rendered from volume data, and the transitions between the same structures may be smooth or sharp depending on the projection angle.

RESEARCH QUESTIONS

RQ1: Is it possible to enhance x-ray images without losing important details?

RQ2: Will the proposed methods lead doctors to a correct diagnosis?

RESEARCH OBJECTIVES

The objectives of the study are:

To investigate image enhancement techniques that improve the quality of an x-ray image.

To propose a new framework for x-ray image enhancement.

To provide noise reduction capabilities, with considerably less blurring, by using an effective filter, namely the median filter.

To propose a method that increases the sharpness of x-ray images.

To design an x-ray image enhancement system based on the proposed methods for better diagnosis.

SIGNIFICANCE OF STUDY

The goal of an image enhancement technique is to improve the characteristics and quality of an image, such that the resulting image is better than the original image.

The enhancement operations have an important potential in obtaining as much easily interpretable diagnostic information as possible with reasonable absorbed doses of ionising radiation. Due to the increasing usage of high precision and resolution images with a limited number of human experts, the computational efficiency of the denoising and enhancement becomes important.

RESEARCH SCOPE

This research focuses on the enhancement of x-ray images.

The proposed system will work on x-ray images. X-ray images pose particular problems for enhancement because they are used for diagnostic purposes: the enhancement must not introduce misleading information, and making a structure look more or less significant than it really is must be avoided.

In this research a suitable enhancement method will be used to obtain better quality x-ray images.

CONTRIBUTION

The raw data obtained directly from the X-ray acquisition device may yield a relatively poor quality image representation. To support correct diagnosis, we will use enhancement techniques to obtain better quality images.

RESEARCH METHODOLOGY

Image enhancement improves the perception of information in images for human viewers and provides better input for other automated image processing techniques. The main objective of image enhancement is to modify the features of an image to make it more suitable for a given task. There is a great deal of subjectivity in the choice of image enhancement methods. Many techniques exist that can enhance a digital image without spoiling it.

The proposed method consists of three steps (a code sketch of the pipeline is given after the list):

1. Apply Contrast Limited Adaptive Histogram Equalization (CLAHE) to the original x-ray image.

2. Apply a median filter to the contrast-enhanced image.

3. Create the negative of the image.
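The following is a minimal sketch of this three-step pipeline, written in Python with scikit-image and SciPy purely for illustration (the proposal's actual implementation environment is ImageJ, and parameter values such as the clip limit and window size are arbitrary placeholders):

import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure

def enhance_xray(image):
    """Three-step enhancement: CLAHE, median filtering, then image negative."""
    # Step 1: contrast limited adaptive histogram equalization (returns floats in [0, 1])
    contrasted = exposure.equalize_adapthist(image, clip_limit=0.02)

    # Step 2: median filter (3x3 window) to suppress impulse noise with little blurring
    filtered = median_filter(contrasted, size=3)

    # Step 3: negative of the 8-bit image
    filtered_8bit = (filtered * 255).astype(np.uint8)
    negative = 255 - filtered_8bit
    return negative

# Example on a synthetic image standing in for an x-ray
xray = (np.random.rand(256, 256) * 255).astype(np.uint8)
result = enhance_xray(xray)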

9.1 Contrast Limited Adaptive Histogram Equalization (CLAHE)

Adaptive histogram equalization is a computer image processing technique used to improve contrast in images. It differs from ordinary histogram equalization in that the adaptive method computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values of the image, whereas ordinary histogram equalization uses a single histogram for the entire image [2].

Adaptive histogram equalization is an image enhancement technique capable of improving an image's local contrast, bringing out more detail in the image. However, it can also amplify noise significantly. Contrast limited adaptive histogram equalization (CLAHE) is a generalization of adaptive histogram equalization that was developed to address this problem of noise amplification.

The noise problem associated with AHE can be reduced by limiting contrast enhancement specifically in homogeneous areas. These areas can be characterized by a high peak in the histogram associated with the contextual regions since many pixels fall inside the same gray level range. The Contrast Limited Adaptive Histogram Equalization (CLAHE) limits the slope associated with the gray level assignment scheme to prevent saturation. This process is accomplished by allowing only a maximum number of pixels in each of the bins associated with the local histograms. After “clipping” the histogram, the clipped pixels are equally redistributed over the whole histogram to keep the total histogram count identical. The CLAHE process is summarized in Table 1.

The clip limit is defined as a multiple of the average histogram contents and is effectively a contrast factor. Setting a very high clip limit means that little or no clipping takes place, and the process reduces to the standard AHE technique. A clip or contrast factor of one prohibits any contrast enhancement, preserving the original image.

1. Obtain all the inputs:
   - Image
   - Number of regions in the row and column directions
   - Number of bins for the histograms used in building the image transform function (dynamic range)
   - Clip limit for contrast limiting (normalized from 0 to 1)

2. Pre-process the inputs:
   - Determine the real clip limit from the normalized value.
   - If necessary, pad the image (to even size) before splitting it into regions.

3. Process each contextual region (tile), thus producing gray-level mappings:
   - Extract a single image region.
   - Make a histogram for this region using the specified number of bins.
   - Clip the histogram using the clip limit.
   - Create a mapping (transformation function) for this region.

4. Interpolate gray-level mappings in order to assemble the final CLAHE image:
   - Extract a cluster of four neighboring mapping functions.
   - Process the image region partly overlapping each of the mapping tiles.
   - Extract a single pixel, apply the four mappings to that pixel, and interpolate between the results to obtain the output pixel.
   - Repeat over the entire image.

Table 1: Summary of the CLAHE procedure.
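The core of step 3, clipping the local histogram and redistributing the excess counts, can be sketched as follows (an illustrative Python/NumPy fragment written from the description above; the region size, bin count and clip factor are arbitrary example values, and practical implementations redistribute the excess iteratively):

import numpy as np

def clip_histogram(hist, clip_limit):
    """Clip a local histogram at clip_limit and spread the excess counts over all bins."""
    hist = hist.astype(float).copy()
    excess = np.sum(np.maximum(hist - clip_limit, 0.0))   # total count above the limit
    hist = np.minimum(hist, clip_limit)                    # clip every bin
    hist += excess / hist.size                             # uniform redistribution of the excess
    return hist

# Example: one 64x64 contextual region of an 8-bit image, 256 histogram bins
region = (np.random.rand(64, 64) * 255).astype(np.uint8)
hist, _ = np.histogram(region, bins=256, range=(0, 256))
clip_limit = 2.0 * hist.mean()                             # clip limit as a multiple of the mean bin content
clipped = clip_histogram(hist, clip_limit)

# The region's gray-level mapping is the normalized cumulative sum of the clipped histogram
mapping = np.round(np.cumsum(clipped) / np.sum(clipped) * 255).astype(np.uint8)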

9.2 Median Filter

We will use this kind of filter on the contrast-enhanced x-ray image. In signal processing, it is often desirable to perform some kind of noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique often used to remove noise from images. Noise reduction is a pre-processing step used to improve the results of later processing (such as edge detection on an image). Median filtering is used widely in digital image processing because, under certain conditions, it preserves edges whilst removing noise.

The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of neighboring entries. The pattern of neighbors is called the "window", which slides, entry by entry, over the entire signal. For 1D signals, the most obvious window is just the first few preceding and following entries, whereas for 2D (or higher-dimensional) signals such as images, more complex window patterns are possible (such as "box" or "cross" patterns). Note that if the window has an odd number of entries, then the median is simple to define: it is just the middle value after all the entries in the window are sorted numerically. For an even number of entries, there is more than one possible median [2].
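As a small illustration of the sliding-window idea (a hedged Python/NumPy sketch; the 3x3 box window and edge replication are arbitrary choices for the example):

import numpy as np

def median_filter_3x3(image):
    """Replace each pixel by the median of its 3x3 neighborhood (edges replicated)."""
    padded = np.pad(image, 1, mode='edge')
    out = np.empty_like(image)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + 3, j:j + 3]      # 3x3 window centered on (i, j)
            out[i, j] = np.median(window)          # odd number of entries -> the middle value
    return out

# Example: an isolated bright pixel (impulse noise) is removed
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255
print(median_filter_3x3(img)[2, 2])                # -> 100, the impulse is gone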

Advantages of the median filter:

- They provide excellent noise reduction capabilities, with considerably less blurring than linear smoothing filters of similar size.

- Median filters are particularly effective in the presence of both bipolar and unipolar impulse noise.

- The median value must be one of the pixel values present in the neighborhood, so the median filter does not create new, unrealistic pixel values.

9.3 Unsharp mask

An "unsharp mask" is actually used to sharpen an image, contrary to what its name might lead you to believe. Sharpening can help you emphasize texture and detail, and is critical when post-processing most digital images. Unsharp masks are probably the most common type of sharpening, and can be performed with nearly any image editing software (such as Photoshop). An unsharp mask cannot create additional detail, but it can greatly enhance the appearance of detail by increasing small-scale acutance

The sharpening process works by utilizing a slightly blurred version of the original image. This blurred version is subtracted from the original to detect the presence of edges, creating the unsharp mask (effectively a high-pass filter). Contrast is then selectively increased along these edges using this mask, leaving behind a sharper final image.
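The process can be sketched in a few lines (an illustrative Python/SciPy fragment; the Gaussian radius and gain are placeholder values, not values tuned for x-ray images):

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, gain=1.0):
    """Sharpen an image by adding back a scaled high-pass (original minus blurred) signal."""
    image = image.astype(float)
    blurred = gaussian_filter(image, sigma=sigma)   # slightly blurred ("unsharp") version
    mask = image - blurred                          # high-pass contrast signal
    sharpened = image + gain * mask                 # boost contrast along edges
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Example on a synthetic 8-bit image
img = (np.random.rand(128, 128) * 255).astype(np.uint8)
sharp = unsharp_mask(img, sigma=2.0, gain=1.5)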

9.4. ImageJ

ImageJ is a public domain Java image processing and analysis program developed at the National Institutes of Health. It runs, either as an online applet or as a downloadable application, on any computer with a Java 1.5 or later virtual machine. ImageJ can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and ‘raw’. It supports ‘stacks’ (and hyperstacks), a series of images that share a single window, and it has many tools and menu commands for easy use. We will use this environment to apply the proposed algorithms to x-ray images.

9.5 Research Database

To evaluate our proposed system, we need a database. The free database will be taken from the http://www.imageprocessingplace.com website. It contains more than 50,000 x-ray images of different parts of the body; the database images are in JPEG format.

9.6 Evaluation

Almost every X-ray image needs to be improved to facilitate access to, or extraction of, the important information it contains. At the same time this process is sensitive, because x-ray imaging is one of the ways diseases are diagnosed, and adding information to x-ray images can lead to a wrong diagnosis. To evaluate our work, after the required implementation is finished, the output data and input data will be sent to experts (doctors and X-ray radiographers), who will give their reports on this work.

LITERATURE REVIEW

1. CLASSIFICATION OF IMAGES:

1.1 Intensity Images

An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or class uint16, they have integer values in the range [0, 255] and [0, 65535], respectively. If the image is of class double, the values are floating-point numbers; values of scaled, class double intensity images are in the range [0, 1] by convention [25].

1.2 Indexed Images

An indexed image is an array of class logical, uint8, uint16, single, or double whose pixel values are direct indices into a color map. The color map is an m-by-3 array of class double. For single or double arrays, integer values range over [1, p]; for logical, uint8, or uint16 arrays, values range over [0, p-1]. An indexed image thus consists of an array and a color map matrix, where the pixel values in the array are direct indices into the color map. By convention, this documentation uses the variable name X to refer to the array and map to refer to the color map [25].
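A minimal sketch of the indexed-image idea (Python/NumPy used only for illustration; in MATLAB the same lookup is done with the X array and map described above, with 1-based indices):

import numpy as np

# Color map: p rows, one [R, G, B] triple per row (values in [0, 1])
cmap = np.array([[0.0, 0.0, 0.0],    # index 0 -> black
                 [1.0, 0.0, 0.0],    # index 1 -> red
                 [0.0, 0.0, 1.0]])   # index 2 -> blue

# Indexed image: each pixel stores an index into the color map (0-based here)
X = np.array([[0, 1],
              [2, 1]], dtype=np.uint8)

rgb = cmap[X]        # the lookup turns the 2x2 index array into a 2x2x3 true color array
print(rgb.shape)     # (2, 2, 3)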

1.3 Binary Images

Binary images have a very specific meaning in MATLAB. In a binary image, each pixel assumes one of only two discrete values, 1 or 0, interpreted as white and black respectively. A binary image is stored as a logical array. Thus, an array of 0s and 1s whose values are of data class, say, uint8, is not considered a binary image in MATLAB [25].

Figure 2. Binary image

1.4 Grayscale Images:

A grayscale image (also called gray-scale, gray scale, or gray-level) is a data matrix whose values represent intensities within some range. MATLAB stores a grayscale image as an individual matrix, with each element of the matrix corresponding to one image pixel. By convention, this documentation uses the variable name I to refer to grayscale images. The matrix can be of class uint8, uint16, int16, single, or double. For single or double arrays, values range over [0, 1]; for uint8, over [0, 255]; for uint16, over [0, 65535]; and for int16, over [-32768, 32767] [25].

Figure 3. Grayscale image

1.5 True color Images

A true color image is an image in which each pixel is specified by three values, one each for the red, blue, and green components of the pixel's color. MATLAB stores true color images as an m-by-n-by-3 data array that defines red, green, and blue color components for each individual pixel. True color images do not use a color map. The color of each pixel is determined by the combination of the red, green, and blue intensities stored in each color plane at the pixel's location. Graphics file formats store true color images as 24-bit images, where the red, green, and blue components are 8 bits each. This yields a potential of 16 million colors. The precision with which a real-life image can be replicated has led to the commonly used term true color image [25].

Figure 4. Color Image.

2. X-Ray images

X rays were discovered accidentally by the German scientist Wilhelm Röntgen (1845-1923) in 1895. He found that a cathode-ray tube emitted certain invisible rays that could penetrate paper and wood and, as the first person in the world to see through human flesh, he even saw a perfectly clear outline of the bones in his own hand. Röntgen studied these new rays, which he called x rays, for several weeks before publishing his findings in December of 1895. For his great discovery, he was given the honorary title of Doctor of Medicine and awarded the 1901 Nobel Prize for physics. Adamant that his discovery should be free for the benefit of humankind, Röntgen refused to patent it [26].

X rays are waves of electromagnetic energy which behave in much the same way as light rays, but at wavelengths approximately 1000 times shorter than the wavelength of light. X rays can pass uninterrupted through low-density substances such as tissue, whereas higher-density targets reflect or absorb the X rays because there is less space between the atoms for the short waves to pass through. Thus, an x ray image shows dark areas where the rays traveled completely through the target (such as with flesh) and light areas where the rays were blocked by dense material (such as bone). Following the discovery of x rays in 1895, this scientific wonder was seized upon by sideshow entertainers who allowed patrons to view their own skeletons and gave them pictures of their own bony hands wearing silhouetted jewelry.

The most important application of the x ray, however, was in medicine, an importance recognized almost immediately after Röntgen's findings were published. Within weeks of its first demonstration, an x ray machine was used in America to diagnose bone fractures. Thomas Alva Edison invented an x-ray fluoroscope in 1896, which was used by American physiologist Walter Cannon (1871-1945) to observe the movement of barium sulfate through the digestive system of animals and, eventually, humans. In 1913 the first x-ray tube designed specifically for medical purposes was developed by American chemist William Coolidge. X rays have since become the most reliable method for internal diagnosis.

At the same time, a new science was being founded on the principles introduced by German physicist Max von Laue (1879-1960), who theorized that crystals could be to x rays what diffraction gratings were to visible light. He conducted experiments in which the interference patterns of x rays passing through a crystal were examined; these patterns revealed a great deal about the internal structure of the crystal. William Henry Bragg and his son William Lawrence Bragg took this field even farther, developing a system of mathematics that could be used to interpret the interference patterns. This method, known as x-ray crystallography, allowed scientists to study the structures of crystals with unsurpassed precision and is an important tool for scientists, particularly those striving to synthesize chemicals. By analyzing the information within a crystal's interference pattern, enough can be learned about that substance to create it artificially in a laboratory, and in large quantities. This technique was used to isolate the molecular structures of penicillin, insulin, and DNA.

Modern medical x-ray machines are grouped into two categories: "hard" or "soft" x rays. Soft x rays, which operate at a relatively low frequency, are used to image bones and internal organs and, unless repeated excessively, cause little tissue damage. Hard x rays, at very high frequencies designed to destroy molecules within specific cells and thus destroy tissue, are used in radiotherapy, particularly in the treatment of cancer. The high voltage necessary to generate hard x rays is usually produced using cyclotrons or synchrotrons (variations of particle accelerators, or atom smashers).

In 1996, amorphous silicon x-ray detectors were introduced which produce real-time, high resolution images by converting x rays into light and the light into electrical signals that are interpreted by a computer, which produces digital data displayed as digital images that can be enlarged to target a specific area. Images are filmless and instantly available, formatted for electronic storage and/or transmission. First applied to mammography, this technology reduces radiation and the cost of film and storage, and can be used in industrial applications. Also in 1996, researchers at NASA's Marshall Space Flight Center developed the high resolution or high brilliance x ray, which generates beams 100 times more intense than conventional x rays. These beams can be controlled and focused by reflecting them through tens of thousands of tiny curved capillaries, much as light is directed through fiber optics. NASA is using this instrument to define the atomic structure of proteins for use as blueprints in designing drugs. It may also lead to smaller, less expensive, and safer x-ray sources [26].

3. Image enhancement

Image enhancement is concerned with the sharpening of image features such as edges or contrast, and has been employed to improve the visual appearance of images. A variety of image enhancement approaches have been proposed for medical images, such as histogram equalization [27] and unsharp masking [28, 29]. These approaches can generally be classified into two categories: global and local (adaptive) enhancements. A global enhancement applies a single transform or mapping to all image pixels, whereas a local enhancement uses an individual mapping in the local area of each processed pixel. The global enhancement methods may work well for some images, but poorly for many images, such as non-uniform or low-contrast images.

4. Histogram

A histogram is the graphical representation of the various intensities of an image. A histogram with a small spread corresponds to low contrast and a histogram with a wide spread to high contrast. An image whose histogram is clustered at the low end of the range is dark, and a histogram with the values clustered at the high end of the range corresponds to a bright image [30].

A histogram can also be modified by mapping functions such as contrast stretching, but such functions will not always produce the desired result. Histogram equalization is used to equalize the image so that its histogram approaches a uniform distribution.

5. Creating an image Histogram

An image histogram is a chart that shows the distribution of intensities in an indexed or grayscale image. You can use the information in a histogram to choose an appropriate enhancement operation. For example, if an image histogram shows that the range of intensity values is small, you can use an intensity adjustment function to spread the values across a wider range. To create an image histogram, use the imhist function. This function creates a histogram plot by making n equally spaced bins, each representing a range of data values, and then calculates the number of pixels within each range [25].

Figure 6 (a). Grayscale image. Figure 6 (b). Histogram of the image.
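The same kind of histogram can be computed in a few lines (a Python/NumPy sketch given only as an equivalent of the imhist idea; 256 bins are used to match an 8-bit image):

import numpy as np

img = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in 8-bit grayscale image

# 256 equally spaced bins, one per possible gray level; counts = number of pixels per bin
counts, bin_edges = np.histogram(img, bins=256, range=(0, 256))

print(counts.sum())      # equals the total number of pixels, 128 * 128
print(counts.argmax())   # the gray level (bin) that occurs most often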

6. Histogram equalization

Histogram equalization is the technique by which the dynamic range of the histogram of an image is increased. Histogram equalization assigns the intensity values of pixels in the input image such that the output image contains a uniform distribution of intensities. It improves contrast and the goal of histogram equalization is to obtain a uniform histogram. This technique can be used on a whole image or just on a part of an image.

Histogram equalization redistributes the intensity distribution. If the histogram of an image has many peaks and valleys, it will still have peaks and valleys after equalization, but they will be shifted. Because of this, "spreading" is a better term than "flattening" to describe histogram equalization. In histogram equalization, each pixel is assigned a new intensity value based on its previous intensity level.
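A compact sketch of this mapping (Python/NumPy for illustration; mapping gray levels through the normalized cumulative histogram is one standard way to implement histogram equalization):

import numpy as np

def equalize_histogram(image):
    """Map each gray level through the normalized cumulative histogram (8-bit image)."""
    counts, _ = np.histogram(image, bins=256, range=(0, 256))
    cdf = np.cumsum(counts).astype(float)
    cdf = cdf / cdf[-1]                              # normalize to [0, 1]
    mapping = np.round(cdf * 255).astype(np.uint8)   # new intensity for each old gray level
    return mapping[image]                            # apply the mapping to every pixel

# Example: a dark, low-contrast image gets spread over the full range
dark = (np.random.rand(64, 64) * 60).astype(np.uint8)   # values only in [0, 60)
equalized = equalize_histogram(dark)
print(dark.max(), equalized.max())                       # e.g. 59 255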

The first histogram equalization methods for image contrast enhancement were proposed in the early seventies [31, 32]. They were based on the principle that the visual contrast of a digitized image can be improved by adjusting its range of gray levels so that the histogram of the output image is flat, i.e., a uniform density can be specified for the output histogram. This idea of flattening the image histogram comes from information theory, which states that the entropy of a signal is maximized when it has a uniform distribution [33].

Histogram equalization was used for image enhancement in [34]. HE is performed on the input image based on the calculated probability density (or distribution) function; the mean brightness of the input image does not change significantly under HE, and noise is prevented from being greatly amplified [34]. [35] improved global histogram equalization by using multi-peak histogram equalization combined with local information; experimental results demonstrated that the proposed method can enhance images effectively [35].

In addition, histogram equalization has been applied to bone fracture images [36]. It gave a good contrast image, but histogram equalization may not always provide the desired effect because its goal is fixed: to distribute the gray-level values as evenly as possible. A disadvantage of the method is that it is indiscriminate; it may increase the contrast of background noise while decreasing the usable signal. The image after equalization was very bright [36].

[37] used histogram equalization with a median filter (3x3 mask). After applying the proposed method, Minimum Mean Brightness Error Bi-Histogram Equalization (MMBEBHE), to the images, impulse noise was present in the images; to avoid this effect, the enhanced image was passed through a median filter. The proposed method gave the best contrast enhancement compared to the other methods [37].

7. Adaptive histogram equalization

Histogram equalization amplifies image noise and increases visual graininess or patchiness. The global histogram equalization technique does not adapt to local contrast requirements, and minor contrast differences can be entirely missed when the number of pixels falling in a particular gray range is small. Adaptive Histogram Equalization (AHE) is a modified histogram equalization procedure that optimizes contrast enhancement based on local image data. The basic idea behind the scheme is to divide the image into a grid of rectangular contextual regions and to apply a standard histogram equalization in each. The optimal number and size of the contextual regions depend on the type of input image, and the most commonly used region size is 8x8 pixels. In addition, a bi-linear interpolation scheme is used to avoid discontinuity issues at the region boundaries [38]. AHE is able to overcome the limitations of the standard equalization method as discussed earlier, and achieves a better presentation of the information present in the image. However, AHE is unable to distinguish between noise and features in the local contextual regions.

AHE was applied by Wu et al. [39] and was found more effective than classical histogram equalization for the detection of small blood vessels characterized by low contrast levels and intensities that decline significantly with the reduction of the vessels' width. An adaptive histogram equalization technique was proposed in [40], which brings only a limited improvement, because fixed contextual regions cannot adapt to features of different sizes. Adaptive histogram equalization was compared to the most common clinical method of contrast enhancement, global linear min-max windowing, in [41]; the results indicate that there is very little difference in the ability of global linear min-max windowing and AHE to depict gray-scale contrast in an image. A fast implementation of AHE based on pure software techniques was proposed in [42]; theoretical analysis and experimental results show that it is quite effective and can run in nearly real time on a general PC, which is important for medical image processing. Another image enhancement method was based upon logarithmic transform coefficient adaptive histogram equalization, using EME as a measure of performance [43]. The performance of this algorithm was compared to a classical histogram equalization enhancement technique. LTAHE has been shown to be a powerful enhancement method; it has the advantage of being quick and simple, being based on a transform-domain adaptive histogram, and its results outperform commonly used enhancement techniques such as histogram equalization.

8. Contrast limited adaptive histogram equalization

Several image contrast enhancement methods have been proposed. For example, Adaptive Histogram Equalization is an effective method that can enhance the contrast of each region. However, slow speed and the enhancement of noise in relatively homogeneous regions are its principal problems. To solve these problems, S.M. Pizer [44] proposed a method called Contrast Limited Adaptive Histogram Equalization (CLAHE). The CLAHE method applies histogram equalization to a contextual region, with each pixel of the original image at the center of its contextual region. The original histogram is clipped and the clipped pixels are redistributed over the gray levels. The new histogram differs from the ordinary histogram because the intensity of each pixel is limited to a user-selectable maximum, so CLAHE can limit the enhancement of noise.

To overcome the noise amplification drawback, a contrast limited AHE (CLAHE) method was proposed in [45]. The noise problem associated with AHE can be reduced by limiting the contrast enhancement specifically in the homogeneous areas of the images. These areas are characterized by a high peak in the histogram associated with the contextual regions, since many pixels fall inside the same gray-level range. A complete description of the method and a real-time implementation were presented in [46] and [47], respectively. A wavelet multi-scale version of CLAHE was also proposed in [48] and applied to improve the contrast of chest CT images. In addition, contrast limited adaptive histogram equalization was applied to bone fracture image enhancement [36]; the results showed that CLAHE gives better results than other equalization methods [36]. In a different direction, CLAHE has been presented as a method to remove fog from video sequences [49]; the results show that this method can limit the enhancement of the noise effectively and clearly enhance the details of the images.

9. Filters

The fundamental requirements of noise filtering methods for medical images are safeguarding important information of the object boundaries and detailed structures, ability to efficiently remove noise in the homogeneous regions and the ability to enhance morphological definitions by sharpening discontinuities [50, 51].

Filtering techniques can be broadly categorized into two types: linear diffusion filtering and non-linear diffusion filtering. Linear diffusion filtering can remove noise efficiently but at the same time may eliminate semantically useful information. In contrast, non-linear diffusion filtering can reduce noise while preserving (or even enhancing) important features of the image such as edges. In this case, non-linear filtering such as the median filter, average filter and Wiener filter can be used for the reduction of speckle noise in ultrasound images. Each filtering technique has its own advantages and disadvantages.

10. Median Filter

Nonlinear image processing techniques have been developed in the last decades, having the advantage of minimizing distortions of informative characteristics [52]. Median filtering technique is one of these non-linear image processing techniques that are widely used for salt and pepper noise removal. It is mostly useful to reduce speckle noise and salt and pepper noise. Its edge-preserving nature makes it practical in cases where edge blurring is undesirable.

A median filter is, as the name suggests, based on finding the median of a set of numbers. Like most noise-removing filters, median filters are low-pass filters. A low-pass filter is a filter that passes low-frequency signals but reduces the amplitude of signals with high frequency. The median filter replaces the current pixel by the median of the neighboring pixels, including the current pixel. The neighborhood can be of any size and shape, but for noise removal it will normally be a square or a disk. As always, a correct choice of parameters is important. The median filter is computationally intensive, and it has therefore been replaced by the pseudomedian filter in [53]. The problem with the pseudomedian filter is that it can introduce line artifacts and generally gives a poorer result than the median filter [54]. The multi-stage median filter has even better edge-preserving capabilities than the median filter [55], but is more computationally intensive. [54] introduces a filter based on nonlinear isotropic diffusion that performs better at edge preservation than the median filter, but is again more computationally intensive; his conclusion is that the median filter has very good denoising capabilities and is sufficient in most cases [54], and he uses it himself in [56]. [57] use a median filter to smooth images before border detection, but with a large enough neighborhood to smooth out artifacts such as hair and air bubbles.

The median filter has also been applied to medical images. [37] used a median filter (3x3 mask) with histogram equalization: after applying the proposed method, Minimum Mean Brightness Error Bi-Histogram Equalization (MMBEBHE), to the images, impulse noise was present, and to avoid this effect the enhanced image was passed through a median filter. [37] chose the median filter for medical images because median filters have some very interesting properties: 1) they can smooth transient changes in signal intensity (e.g., noise); 2) they are very effective at removing impulsive noise from signals; 3) they preserve edge information in the filtered signal; and 4) they can be implemented using very simple digital nonlinear operations. In the same way, we will use a median filter together with CLAHE.

11. Unsharp masking

Unsharp masking is a common and simple technique for contrast enhancement. It was used in wet darkrooms and now is frequently used in digital image processing, print and even video to increase the perceived image quality. Unsharp masking can enhance the finest contrasts (the down side is noise enhancement) and can also enhance local contrast (when the kernel has greater spatial extent). The approach provides control over which scale of contrasts to enhance, thus sharpening edges, or enhancing broader contours.

Unsharp masking is just one of the many techniques used by photographers, artists, engineers and astronomers to enhance images [58, 59]. Unsharp masking techniques have also been used in the medical field, especially by radiologists [60, 61].

In the darkroom, unsharp masking proceeds by first creating the “unsharp” mask. Explaining its moniker, this mask is the difference of the original image and a blurred (unsharp) version, which leaves only the high-frequency contrasts unmasked. The final contrast enhanced image is made by developing the image with normal contrast and compositing a globally contrast stretched version with the mask. In this way, the contrast-stretched areas appear only at the unmasked high-frequency regions.

As early as 1970, the ability of unsharp masking to create superior renditions was exploited, especially to improve photographs that appeared in newspapers. Its main benefit is its simplicity, which meant that the technique could be “incorporated in conventional wirephoto transmitters with very little cost” [62]. The technique adapted the darkroom process to create electronic masking on a wirephoto transmitter, and was explained as an alteration in the amplitudes of the sharp and unsharp signals.

Unsharp masking contains at its core a highpass function for determining the enhancement image, also called the contrast signal, and another function for determining the gain values. In standard image editing software, the highpass is performed on the luminance channel, or separately for each channel, by subtracting a Gaussian-blurred version from the original. The sign of the highpass specifies whether there will be a lightening or darkening along an edge, creating a ‘halo’ whose size is controlled by a user-specified Gaussian filter radius. The gain value is a constant user-chosen amount, and an additional threshold controls the minimum difference that will be enhanced.

The process for creating a contrast enhanced image u(I) is formulated as

u(I) = I + λh(I),

where λ is the gain factor that scales the highpass contrast signal h(I).

Unsharp masking approaches have already proven useful for colour image processing. Within their framework for extending grayscale image processing techniques to color images, [63] sharpen colour images by adding highpassed versions of both the luminance and saturation channels to the original luminance channel. In [64], Thomas et al. improve this technique by limiting artifacts due to noise and sign conflicts between the two enhancing signals according to a local measure of saliency.