Filtering Methodologies for Lung Carcinoma


This project compares the performance of filters implementing different noise-removal techniques on lung carcinoma tissue images, and provides detailed background information on lung cancer detection. Various filters, such as median filters, adaptive Wiener filters, Gabor filters, and wavelet thresholding filters, are studied; these are used to detect nodules in the peripherals of the lung fields. The filters were implemented in MATLAB. The study of these filters helps provide better Computer-Aided Diagnosis (CAD) for lung carcinoma tissue images, improving the ability to identify early tumors for successful treatment. Both cancerous and non-cancerous regions appear with little distinction on an X-ray image, so for accurate detection the cancerous nodules must be differentiated from the non-cancerous ones. Various imaging techniques, such as X-ray, ultrasound diagnostics, and magnetic resonance imaging (MRI), yield information that doctors and radiologists analyze and evaluate broadly in a short span of time. The key process of CAD is to develop algorithms that produce more true positives and fewer false negatives; generally a few thousand images are necessary to optimize such an algorithm. Additional improvement in false-positive reduction can be obtained by integrating image filtering as a pre-processing step in the CAD system. The noise filtration technique is quite robust and has many extensions; its applications include spatially dependent noise filtration, image enhancement and restoration, edge detection, and motion artifact removal.


Lung cancer is the leading cause of cancer deaths in the world. It is important to detect and treat cancer in its early stages to improve the survival rate of cancer patients. Cancer develops when lung cells grow at an uncontrollable rate; the abnormal tissue masses inside the lungs are called tumors. There are two types of tumors: benign (non-cancerous) and malignant (cancerous). The diagnosis of cancer involves chest X-ray films, CT scans, MRI, isotope imaging, and bronchoscopy. Nevertheless, it is difficult to detect and diagnose lung cancer in chest X-ray images, because many tissues overlap each other in the X-ray chest film and many objects obscure the cancerous tissue, such as ribs, blood vessels, and other anatomic structures.

With the evolution of computers, Computer-Aided Diagnosis (CAD) has become a powerful tool for assisting doctors and radiologists in the analysis and interpretation of medical images, which is not an easy task. The concept of using computers to help and improve the interpretation of medical images is known as computer-aided diagnosis. In computed tomography, for example, CAD systems can help scan digital images for typical appearances and highlight suspicious sections, such as possible tumors.

CAD systems seek to highlight suspicious structures. Most CAD systems cannot guarantee 100% detection of pathological tumors; the sensitivity can be up to 90% depending on the system used for detection. A correct hit is known as a True Positive (TP), whereas the incorrect flagging of a healthy tissue section is called a False Positive (FP). The fewer FPs indicated, the higher the specificity; low specificity reduces the acceptance of the CAD system. The FP rate in lung cancer examinations (CAD Chest) can be reduced to 2 per examination, while in other settings, such as CT lung examinations, the FP rate can be 25 or more.

Detection performance was evaluated with Receiver Operating Characteristic (ROC) analysis, an analytical procedure for measuring the accuracy of the system. This characteristic can be used to trade off the true-positive probability against the false-positive probability, and it provides both a desirable index of accuracy and an appropriate basis for an index of efficiency.

Sensitivity is another evaluation metric for CAD: the number of suspected nodule areas (SNAs) correctly classified as positive. Specificity is the number of correctly diagnosed negative SNAs out of all negative SNAs, and accuracy is the total number of correctly diagnosed SNAs out of the total number of SNAs.

CAD is a fairly recent technology that combines Artificial Intelligence (AI) and Digital Image Processing (DIP) with radiological image processing, typically for the detection of tumors. The key process of CAD is to develop algorithms that produce more true positives and fewer false negatives; generally a few thousand images are necessary to optimize the algorithm. These systems usually follow a hierarchical concept: they first apply detailed image pre-processing steps to enhance suspicious areas in the image, and then employ morphological and textural analysis to classify these structures into true abnormalities and false positives.


Therefore, CAD (computer-aided diagnosis) is the fastest and most efficient method for detecting lung cancer nodules and provides radiologists with better decision-making support. To improve the survival rate of cancer patients, it is important to detect and treat cancer in its early stages. Nodules in the peripherals of the lung fields are detected through the study of filters such as the median filter, the adaptive Wiener filter, the Gabor filter, and wavelet thresholding techniques. These filters help provide better computer-aided diagnosis of lung carcinoma tissue images, greatly improving the identification of early tumors for successful treatment. A large amount of analysis is required because both cancerous and non-cancerous regions appear with only small differences on X-ray or CT images; the cancerous nodules must therefore be differentiated from the non-cancerous ones for accurate detection.

Noise in the image is one of the common concerns in image processing, and denoising should be the first step before any features are extracted from the image. It is a common methodology in image processing to assume that the noise is additive with zero mean and constant variance. Although this assumption simplifies filtering and deblurring, it can result in poor image quality, which shows the importance of taking the actual noise properties into consideration. These noises are mainly categorized into Gaussian noise and impulsive noise.

Why Pre-Processing?

The images obtained for cancer diagnosis suffer from noise, lack of spatial information, low contrast, and blurring. In particular, three filters are tested and investigated: the Wiener filter, a popular denoising filter; the Gabor filter, for edge detection; and wavelet thresholding filters, which have also been studied extensively.

Here an attempt has been made to identify the most apt filter for pre-processing lung carcinoma images. The work is organized as follows: Section 2 describes the filters used for denoising, and Section 3 presents the results.


Different filters perform several successive independent processing steps, which respectively correct noisy images, deblur them, and make other image adjustments. The basic image processing of a biomedical image includes the following steps.

Image Acquisition

Image Enhancement and Restoration

Feature Selection

Image Representation and Interpretation

Image pre-processing is one of the most essential techniques for better analysis of the image, and noise is the primary concern before any features are extracted. It is a common methodology in image processing to assume that the noise is additive with zero mean and constant variance. Although this assumption simplifies filtering and deblurring, it can result in poor image quality when the actual noise properties are taken into consideration. These noises are mainly categorized into Gaussian noise and impulsive noise.


Gaussian noise:

Gaussian noise is the most common type of noise, arising from the contributions of many independent signals. The variations in intensity are drawn from a Gaussian (normal) distribution. The most widely used white Gaussian noise is a zero-mean stochastic process, which can be described as follows:

White: n(i,j) is independent in both space and time

Zero-mean: E[n(i,j)] = 0

Gaussian: n(i,j) is a random variable with distribution

p(x) = (1 / (σ√(2π))) exp(−x² / (2σ²))

Impulsive noise:

Impulsive noise is also termed salt-and-pepper noise or speckle noise. It is mainly caused by transmission or storage errors, and the corrupted image contains random occurrences of black and white pixels. The impulsive noise can be described as

Isp(i,j) = 0 (pepper) or 255 (salt) at randomly chosen locations, and I(i,j) otherwise

where the corrupted locations are given by uniformly distributed random variables x and y.

These noises in image sequences can be removed using a plethora of filtering techniques; the most widely used effective ones are adaptive filters, Gaussian filters, and linear and non-linear filters.
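As a minimal illustration of the two noise models above, the NumPy sketch below (the function names and parameter defaults are illustrative assumptions, not this project's code) adds zero-mean Gaussian noise and salt-and-pepper noise to an 8-bit image array:

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0, seed=0):
    """Add zero-mean white Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255)   # keep values in the 8-bit range

def add_salt_pepper_noise(img, density=0.05, seed=0):
    """Set a fraction `density` of pixels to black (0) or white (255)
    at uniformly distributed random locations."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float).copy()
    mask = rng.random(img.shape)           # uniform random variable per pixel
    noisy[mask < density / 2] = 0          # pepper
    noisy[mask > 1 - density / 2] = 255    # salt
    return noisy
```

Both functions leave the input untouched and return a new array, which makes it easy to compare a filter's output against the clean original.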


Filtering is the process of enhancing or modifying an image to underline particular features or remove unwanted ones. It can be visualized as a neighborhood operation, in which any given pixel value in the output image is determined by applying some algorithm to the values of the pixels in the neighborhood of the corresponding input pixel. In linear filtering, each output pixel is a linear combination of the values of the pixels in the input pixel's neighborhood; linear filters are good for reducing the mean square error. Non-linear filters are used when the signal contains high-frequency components such as edges and fine details.

Median filter


This is a standard technique for removing noise while preserving the edges of images. It considers each pixel in the image and simultaneously examines the neighboring pixels to decide whether it is representative of its surroundings. Instead of simply replacing the pixel with the mean of the neighboring pixel values, it computes the median by sorting all pixel values from the surrounding neighborhood into numerical order and then replacing the pixel under consideration with the middle value. It is very effective at removing noise from images in which fewer than half of the pixels in a smoothing neighborhood have been affected, and it allows a great deal of high-spatial-frequency detail to pass.

A median filter must sort all the values from the neighborhood into numerical order, for which fast sorting algorithms are used; the basic algorithm can be further enhanced for speed. When an image is thresholded, minute grey-scale variations can interfere as noise, commonly called salt-and-pepper noise. The median filter is therefore primarily used to remove salt-and-pepper noise from the image without affecting the sharpness of the original image.

Impulse noise is characterized by bright or dark high-frequency features appearing indiscriminately over the image. Statistically, impulse noise falls well outside the peak of the distribution of any given pixel neighborhood, so the median filter is appropriate for finding where impulse noise is present and removing it by exclusion. The median of a list of sample values is found by sorting the values and picking the central one, and it is said to be a good estimator of the peak position. If the distribution has two peaks, or has no central peak, the median is normally meaningless.

The median filter is a nonlinear digital filtering technique, mostly used to remove noise as a typical pre-processing step that enhances the results of later processing (for example, edge detection on an image). Median filters are used extensively in digital image processing because, under certain conditions, they preserve edges while removing noise.

The median filter's main idea is to run through the signal entry by entry, replacing each entry with the median of the neighboring entries. The pattern of neighbors is called a "window", which slides, entry by entry, over the entire sequence. For one-dimensional signals, the most apparent window is just the first few previous and next entries, while for two-dimensional or higher-dimensional signals such as images, more complex window patterns are possible. If the window has an odd number of entries, the median is simply the middle value after all the entries in the window are sorted numerically.
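The sliding-window procedure just described can be sketched as follows (a naive NumPy implementation for illustration only; a production version would use the faster sorting-based algorithms mentioned earlier):

```python
import numpy as np

def median_filter(img, k=3):
    """Naive 2-D median filter with a k-by-k sliding window (k odd).
    Edges are handled by reflecting the image at its borders."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + k, j:j + k]
            out[i, j] = np.median(window)   # middle value of the sorted window
    return out
```

A single impulse pixel (salt or pepper) never survives a 3x3 window, because it is at most one of the nine sorted values and can never be the middle one.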

Like linear Gaussian filtering, median filtering is a kind of smoothing technique, and most smoothing techniques are effective at removing noise in smooth patches or regions of a signal but negatively affect edges. While reducing the noise in a signal is important, it is also vital to preserve the edges. For small to moderate levels of Gaussian noise, the median filter is evidently better than Gaussian blur at removing noise while preserving edges for a given, fixed window size. For high levels of Gaussian noise its performance falls below that of Gaussian blur, but for salt-and-pepper (impulsive) noise it is particularly effective. This explains the median filter's wide usage in image processing.

Adaptive Wiener filters

Adaptive Wiener filters are, like median filters, applied for denoising, but they adapt to statistics estimated from the local neighborhood of each image pixel. Here we consider the adaptive weighted averaging (AWA) method to approximate the second-order statistics essential for the Wiener filter; the resulting Wiener filter is improved by around 1 dB in peak signal-to-noise ratio (PSNR). Moreover, an important feature of this filter is that the boundary noise common in the traditional Wiener filter is greatly suppressed.

The proposed adaptive weighted averaging wavelet Wiener filter described here is better than the traditional wavelet Wiener filter by around 0.5 dB (PSNR). Images and image sequences are frequently corrupted by noise in the acquisition and transmission phases. The goal of denoising is to remove the noise, both for aesthetic and compression reasons, while retaining as much as possible of the important signal features. Very commonly this is achieved by approaches such as Wiener filtering, which is the optimal estimator (in the sense of mean squared error, MSE) for a stationary Gaussian process.

These insights have motivated the design of adaptive Wiener filters, called local linear minimum mean square error (LLMMSE) filters. The LLMMSE filter, used extensively for video denoising, is successful in the sense that it effectively removes noise while preserving important image features. However, it suffers from noise artifacts around edges, due to the assumption that all samples within a local window are from the same group. This assumption is invalidated if, for example, there is a sharp edge within the window: the sample variance near the edge will be biased large because samples from two different groups are combined, and similarly the sample mean will tend to smear. The main problem is how to effectively estimate the local statistics.

The amount of smoothing performed by this filter depends on the local image mean and the variance around the pixel of interest. The adaptive Wiener filter preserves the high-frequency parts of the image better than the regular Wiener filter, whose low-pass characteristics limit its success in image processing by causing unacceptable blurring of lines and edges. The reason the standard Wiener filter blurs the image significantly is that a fixed filter is used throughout the entire image, i.e., the filter is space-invariant. If the signal is a realization of a non-Gaussian process, as in natural images, the Wiener filter is outperformed by nonlinear estimators.
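A minimal sketch of this locally adaptive behavior, in the spirit of MATLAB's wiener2 (written here in NumPy; the function name and the default noise estimate are assumptions, not this project's exact implementation):

```python
import numpy as np

def adaptive_wiener(img, k=5, noise_var=None):
    """Locally adaptive Wiener (LLMMSE-style) filter: each pixel is
    smoothed in proportion to the locally estimated signal-to-noise
    ratio in a k-by-k neighbourhood."""
    pad = k // 2
    imgf = img.astype(float)
    padded = np.pad(imgf, pad, mode="reflect")
    # Local mean and variance over all k-by-k windows at once
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    local_mean = windows.mean(axis=(-2, -1))
    local_var = windows.var(axis=(-2, -1))
    if noise_var is None:
        noise_var = local_var.mean()   # crude global noise estimate
    # Gain is ~0 in flat regions (full smoothing) and ~1 near edges
    gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (imgf - local_mean)
```

In flat regions the local variance is close to the noise variance, so the gain collapses to zero and the pixel is replaced by the local mean; near an edge the local variance dominates, the gain approaches one, and the pixel is left almost unchanged.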

Recently, wavelet-based denoising has attracted extensive attention because of its effectiveness and simplicity. The most common wavelet denoising methods can be classified into two groups: shrinkage and wavelet Wiener. The intuition behind wavelet shrinkage is that the wavelet transform's effectiveness at energy compaction allows small coefficients to be interpreted as noise and large coefficients as important signal features.

The wavelet Wiener method is based on the observation that a natural image can be well modeled in the spatial domain as a nonstationary mean, nonstationary variance (NMNV) Gaussian random process, from which it follows that the wavelet coefficients are similarly NMNV Gaussian. By properly estimating local means and variances, wavelet Wiener has denoising performance comparable to wavelet shrinkage. Based on the success of AWA-based spatial Wiener filtering, we wish to develop these ideas further in the wavelet domain. However, several points should be emphasized:

1. The mean values of all sub-bands above the lowest frequency are very small and can reasonably be assumed to be zero. The only problem detected with this assumption is that the denoised images suffer from more ripple-like artifacts around edges. Conversely, using an AWA-estimated local mean yields much better edges but leads to structured artifacts in smooth regions. In the present experiments, we use the zero-mean assumption, so only the local variance is estimated.

2. Although the wavelet transform is an effective decorrelator, structured correlations do remain among the wavelet coefficients. For example, the horizontal high-frequency sub-band has much stronger correlation in the horizontal direction than in the vertical direction. Therefore, the shape of the adaptive window really should be modulated based on some prior understanding of wavelet statistics.

Wavelet Thresholding

Image enhancement is very important for better visualization of the object, so removing the noise from the original image is a required pre-processing step. Wavelet thresholding is a signal estimation technique that exploits the capabilities of the wavelet transform for signal denoising (first proposed by Donoho). Most often, spatial filters are used to remove noise by smoothing the data, but this always introduces an unwanted blurring effect. The wavelet thresholding method is a more optimal approach that provides excellent noise-removal performance. Thresholding produces a segmentation that yields all the pixels belonging to the object or objects of interest in an image.

Medical images are mainly corrupted by noise at the time of acquisition and transmission. The main aim of image denoising is to remove such additive noise while retaining most of the important features of the input signal (image). In recent years, extensive research has been done on wavelet thresholding and threshold selection for signal denoising, because the wavelet transform provides a suitable basis for differentiating noise from the signal: wavelets transform the input data into a different basis in which large coefficients correspond to important signal features and small coefficients correspond to noise. The smaller coefficients can be thresholded without noticeably affecting the important signal features.

Thresholding is a simple non-linear technique that operates on one wavelet coefficient at a time. In soft thresholding, each coefficient is compared against the threshold value and shrunk toward zero, with coefficients smaller than the threshold set to zero. In hard thresholding, the coefficient is preserved if its magnitude is greater than the threshold and set to zero otherwise. Replacing the small (noise) coefficients by zero and taking the inverse wavelet transform of the result leads to a reconstruction with the essential signal characteristics and less noise. In this project, a near-optimal threshold estimation technique for image denoising under the mean square error optimality criterion is proposed, with the test images corrupted by additive white Gaussian noise.
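The two rules can be sketched directly on an array of coefficients (a minimal NumPy illustration; in practice they would be applied to each wavelet sub-band after the forward transform):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink every coefficient toward zero by t; magnitudes below t
    become exactly zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def hard_threshold(coeffs, t):
    """Keep coefficients whose magnitude exceeds t unchanged; zero the rest."""
    return coeffs * (np.abs(coeffs) > t)
```

Soft thresholding also shrinks the surviving large coefficients by t, which is what gives it the smoother reconstructions mentioned in the results, while hard thresholding leaves them untouched.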

The wavelet thresholding technique is therefore a simple, fast, and efficient method for suppressing the corrupting noise while preserving edges well. Its major advantage over other spatial-domain denoising filters is energy compaction.

Gabor filter

Gabor filtering is mostly performed for edge detection; it captures the major axis of symmetry of a feature at a particular spatial scale while preserving the edges of an image. The filter is a sinusoidal wave modulated by a Gaussian envelope in the spatial domain.

The impulse response of the Gabor filter is the product of a harmonic function and a Gaussian function. Due to the multiplication-convolution property (convolution theorem), the Fourier transform of a Gabor filter's impulse response is the convolution of the Fourier transform of the harmonic function with the Fourier transform of the Gaussian function.

Gabor filters have both orientation-selective and frequency-selective properties, and therefore use the orientation image and the frequency image to filter the normalized image. This can be written as:

h(x,y) = s(x,y)*g(x,y)

where s(x,y) is a complex sinusoid and g(x,y) is a two-dimensional Gaussian envelope:

g(x,y) = exp( −(xφ² / (2σx²) + yφ² / (2σy²)) )

h(x,y) = g(x,y) · exp( j2πf·xφ )

xφ = xcos φ + ysin φ

yφ = -xsin φ + ycos φ

where φ is the orientation of the filter, f is the frequency of the sinusoid, and σx, σy are the standard deviations of the Gaussian envelope.
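Under the definitions above, a real-valued Gabor kernel can be sketched as follows (NumPy; the kernel size and the choice of a cosine carrier rather than the full complex exponential are illustrative assumptions):

```python
import numpy as np

def gabor_kernel(size=15, f=0.1, phi=0.0, sigma_x=3.0, sigma_y=3.0):
    """Real-valued Gabor kernel: a Gaussian envelope g(x,y) multiplied
    by a cosine carrier of spatial frequency f along orientation phi."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_phi = x * np.cos(phi) + y * np.sin(phi)    # rotated coordinates
    y_phi = -x * np.sin(phi) + y * np.cos(phi)
    g = np.exp(-0.5 * (x_phi**2 / sigma_x**2 + y_phi**2 / sigma_y**2))
    return g * np.cos(2 * np.pi * f * x_phi)
```

A filter bank is then built by sampling several orientations phi and frequencies f and convolving the image with each kernel, which is what gives Gabor filtering its orientation- and frequency-selective character.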

The Gabor space is often valuable in image processing applications such as optical character recognition, iris recognition, and fingerprint recognition. The relations between activations at a specific spatial location are very distinctive between objects in an image. Moreover, significant activations can be extracted from the Gabor space in order to create a sparse object representation.

Due to its relevance to multi-channel filtering, Gabor filtering is considered an excellent pre-processing choice for image registration. Gabor filters enhance the detection of nodules by accentuating their spatial-frequency components and rejecting other components. These filters model the spatial-frequency and orientation responses of simple cells in the primary visual cortex, and they have been proven optimal in the sense of minimizing the joint two-dimensional uncertainty in the space-frequency plane. Gabor filters form banks of band-pass filters, as they are wavelet-derived.

Testing Procedure:

These filters were implemented in MATLAB and tested by adding random white Gaussian noise and measuring the image restoration with the two most common measures: Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).

The mean square error is the cumulative squared error between the original and the processed image, while the PSNR is the ratio of the peak signal power to the noise power, generally expressed in decibels.

MSE = (1 / (M·N)) Σx Σy [ I(x,y) − I′(x,y) ]²

PSNR = 10 log10( 255² / MSE )

where I(x,y) is the original image, I′(x,y) is the enhanced or approximated image, and M and N are the image dimensions. To interpret these metrics, a lower MSE implies less error, which translates into a higher PSNR value through the inverse relation between MSE and PSNR. A higher PSNR value is better because the signal-to-noise ratio is higher.
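These two metrics can be computed directly (a minimal NumPy sketch assuming 8-bit images with peak value 255):

```python
import numpy as np

def mse(original, approx):
    """Mean squared error over an M-by-N image pair."""
    return np.mean((original.astype(float) - approx.astype(float)) ** 2)

def psnr(original, approx, peak=255.0):
    """Peak signal-to-noise ratio in decibels; infinite for identical images."""
    e = mse(original, approx)
    return float("inf") if e == 0 else 10.0 * np.log10(peak**2 / e)
```

The guard for zero error matters in practice: a filter evaluated against its own input would otherwise trigger a division by zero.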

The test images used in this project are referred to as Image 1, Image 2, and Image 3.

Figure.1 Test Images used.


Of all the filtering methods, the wavelet transform proves to be an excellent choice due to its localization property. Wavelet denoising attempts to remove the noise present in the signal while preserving the signal characteristics regardless of its frequency content, by suppressing the coefficients that are insignificant relative to some threshold. The MSE and PSNR from the various methods are compared in Table 1. Comparisons are made with linear filters such as the Wiener filter; the results show that their PSNR is worse than that of the nonlinear thresholding methods, particularly when σ is large. The image denoising algorithm uses soft thresholding to provide smoothness and better edge preservation at the same time.