Computer Aided Diagnosis For Lung Cancer


Cancer is a disease in which abnormal body cells divide uncontrollably and generate excess tissue that forms a tumor. Cancer cells are capable of spreading to other parts of the body through the blood and lymph systems. There are many types of cancer.

When uncontrolled cell growth occurs in one or both lungs, it is called lung cancer. Instead of developing into healthy, normal lung tissue, these abnormal cells continue dividing and form lumps or masses of tissue called tumors. These tumors disturb the main function of the lung, which is to supply oxygen through the bloodstream to the entire body.

Types of Lung Cancer

Cancers that begin in the lungs are divided into two major types, non-small cell lung cancer and small cell lung cancer, depending on how the cells look under a microscope. Each type of lung cancer grows and spreads in different ways and is treated differently.


Small cell lung cancer (SCLC)

This is usually believed to be a systemic disease at the time of diagnosis and thus surgery plays no part in the management of this disease.

SCLC staging

Limited disease: disease confined to one hemithorax that can be incorporated in a reasonable field of thoracic radiation therapy.

Extensive disease: disease extending beyond one hemithorax, or disease that cannot be incorporated in a reasonable field of thoracic radiation therapy.

Non-small cell lung cancer (NSCLC)

NSCLC progresses later in its course than SCLC, and consequently surgery offers the best chance of cure. Patients who are considered for surgical treatment must be carefully staged to determine tumour resectability. PET helps to assess nodal involvement. However, only 15% of patients are suitable for resection at diagnosis. The patient must also be carefully assessed pre-operatively to ensure fitness for surgery.

Small-cell lung cancer (SCLC) differs from non-small-cell lung cancer in the following ways:

SCLC grows quickly.

SCLC spreads quickly.

SCLC responds well to chemotherapy and radiation therapy.

SCLC is frequently associated with distinct paraneoplastic syndromes.

Lung cancer is one of the most dangerous cancers in the world, with one of the lowest survival rates after diagnosis and a gradual increase in mortality every year. The probability of surviving lung cancer is inversely proportional to the tumour's growth at the time of detection, so the chances of successful treatment are good only if the disease is detected at an early stage. Estimates show that 85% of lung cancer cases in males and 75% in females are caused by cigarette smoking [104]. The overall survival rate for all types of cancer is 63%. Although surgery, radiation therapy, and chemotherapy have been used in the treatment of lung cancer, the five-year survival rate for all stages combined is only 14%, a figure that has not changed in the past three decades [105].


Figure 4.1: An example of Lung Cancer

Several thin-section CT images are produced in the clinic for each patient and are traditionally evaluated by a radiologist who examines each image in the axial mode. Many of these images are difficult to interpret, and reading them consumes a great deal of time, which leads to high false-negative rates for detecting small lung nodules and thus to potentially missed cancers. The fundamental idea of designing a CAD system is to make a machine algorithm act as a support to the radiologist and point out the locations of suspicious objects, so that the overall sensitivity is raised.

A CAD system must meet the following needs:

improving the quality and accuracy of diagnosis,

increasing the success of therapy through early detection of cancer,

avoiding unnecessary biopsies, and

reducing radiologist interpretation time [106].

A CAD system for the early detection of lung cancer, based on automatic diagnosis of the lung regions in chest CT images using neural networks, is proposed in this chapter. Fuzzy Possibilistic C-Means (FPCM) is used for clustering in the proposed approach.


Various neural network techniques have been employed in cancer detection approaches. Recently, ANNs have become a main research area in health care modeling, and it is believed that they will receive extensive application in biomedical systems in the coming years [107]. Neural networks learn by example, so explicit knowledge of how to recognize the disease is not needed; only a set of examples (patterns) representative of all the variations of the particular disease is required. A high level of accuracy in disease recognition is obtained by carefully choosing these patterns.


Artificial neural networks (ANN)

These are basic models of the biological nervous system and are inspired by the kind of computing performed by the human brain. An ANN is a massively parallel distributed processing system made up of highly interconnected neural computing elements that have the ability to learn and thus acquire knowledge and make it available for use [108]. The data obtained by electrical impedance spectroscopy has a strong relation with soft computing in distinguishing cancerous areas from normal areas, so an ANN, which is an information processing system, can be used as an appropriate tool for cancer detection.

Certain performance characteristics of ANNs are shared with biological neural networks. An ANN contains nodes that are connected through weights. Each node receives data from the preceding nodes, combines it, passes it through a nonlinear function, and then propagates it to the succeeding nodes. An ANN operates in two phases:

training phase

test phase

In the training phase, the input patterns are presented to the ANN and the weights are adjusted and fixed so that the network learns these patterns. In the test phase, patterns that were not used during training are presented to the ANN, and its outputs are used to estimate its performance [109]. If the performance of the ANN is satisfactory, it can be used in its specific application.

Artificial Neural Network Structures

Neural networks have been widely used in various applications such as pattern classification, pattern completion, function approximation, optimization, prediction and automatic control. ANNs fall into two categories, supervised and unsupervised learning. An ANN is supervised only if the outputs of the input patterns used in its training phase are available from a particular experiment; otherwise it is unsupervised.

Supervised ANNs can also be categorized into two groups, namely error-based and prototype-based. The main aim of an error-based network is to reduce a cost function defined on the basis of the error between the desired output and the network output. The main aim of a prototype-based network is to reduce the distance between the input patterns and the prototypes assigned to each cluster.

The Multilayer Perceptron (MLP) and Radial Basis Function (RBF) networks are examples of error-based networks, and Learning Vector Quantization (LVQ) is an example of a prototype-based network.

The Multilayer Perceptron (MLP) network is one of the important supervised neural network structures. It is a feed-forward layered network with one input layer, one output layer, and some hidden layers [109]. MLP training is based on the minimization of a suitable cost function using the back-propagation algorithm. The first version of this algorithm, based on the gradient descent technique, was proposed by Werbos [110] and Parker [111].

The fundamental construction of a Radial Basis Function (RBF) network consists of three layers with entirely different roles. The input layer consists of source nodes that connect the network to its environment. A nonlinear transformation from the input space to the hidden space is applied in the second layer; in most applications the hidden space is of high dimensionality. The output layer is linear, providing the response of the network to the activation pattern applied to the input layer [112-113].

Learning Vector Quantization (LVQ) has its roots in the vector quantization introduced by Linde et al. [114] and Gray [115]; initially used for image data compression, it was later adapted by Kohonen [116] for pattern recognition. The fundamental idea is to divide the input space into a number of distinct regions, called decision regions.

The Simplified Fuzzy ARTMAP (SFAM), an abridged model of fuzzy adaptive resonance theory, is a prototype-based network that can handle both binary and analogue data in a supervised manner. Despite the high practical potential of the SFAM network, its intricacy discourages many from using it.

These four ANN structures have been used to predict the malignancy of different cancers.

Wavelet Neural Network

The multilayer perceptron (MLP) with the back-propagation learning algorithm is the most popular type of ANN in practical situations [117]-[118]. However, the disadvantages of an MLP are:

difficulties in reaching the global minimum in a complex search space


time-consuming training, and

failure to converge when high nonlinearities exist.

These limitations degrade the accuracy of its applications. To overcome the deficiencies of the MLP, the Wavelet Neural Network (WNN) has been introduced as a viable alternative [119]. Wavelet families are used as the activation function in the hidden layer of WNNs. Several issues are associated with WNNs, ranging from the choice of learning algorithm and network architecture to the type of activation function used in the hidden layer and the parameter initialization.

A proper initialization of the network parameters is a key factor in achieving a faster convergence rate and higher accuracy. Approaches using an explicit expression, hierarchical clustering, support vector machines, genetic algorithms and K-Means clustering are among those that have been applied to parameter initialization [120, 121]. Various clustering algorithms, namely K-Means (KM), Fuzzy C-Means (FCM), symmetry-based K-Means (SBKM), symmetry-based Fuzzy C-Means (SBFCM) and modified point symmetry-based K-Means (MPKM), are available for initializing the WNN translation parameters. These clustering algorithms can be integrated into the WNN and applied to a real-world application where the classification of heterogeneous cancers using microarray data is the main concern.
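
As a brief illustration of such clustering-based initialization, the following minimal Python sketch sets the translation parameters of the hidden wavelet nodes to K-Means cluster centres. The function name, the data layout and the use of scikit-learn's KMeans are assumptions for illustration only; any of the clustering algorithms listed above could be substituted.

```python
import numpy as np
from sklearn.cluster import KMeans

def init_wnn_translations(X, n_hidden, seed=0):
    """Initialise WNN hidden-node translation parameters from K-Means centres.

    X: (n_samples, n_features) training inputs, e.g. microarray features.
    n_hidden: number of wavelet nodes in the hidden layer.
    """
    km = KMeans(n_clusters=n_hidden, n_init=10, random_state=seed).fit(X)
    return km.cluster_centers_        # one translation vector per hidden node
```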

Probabilistic Neural Network (PNN)

The PNN was developed by Specht [122], [123]. It provides a general solution to pattern classification problems by following a probabilistic approach based on the Bayes formula. Bayes decision theory, which emerges from this formula, takes into account the relative likelihood of events and uses a priori information to improve prediction. The network model uses Parzen estimators to obtain the probability density functions (p.d.f.) corresponding to the classification categories. Parzen [124] showed that this class of p.d.f. estimators asymptotically approaches the underlying density function, provided that it is continuous. Cacoullos [125] extended Parzen's approach to the multivariate case.

A supervised training set is used by the PNN to develop probability density functions within a pattern layer. Training a PNN is much simpler than training other ANNs. The key advantages of the PNN are that training requires only a single pass and that the decision hypersurfaces are guaranteed to approach the Bayes-optimal decision boundaries as the number of training samples grows. On the other hand, the main limitation of the PNN is that all training samples must be stored and used when classifying new patterns. To decrease the computational cost, dimensionality reduction and clustering approaches are usually applied prior to the PNN construction.

The PNN-based decision approach was applied to categorize a group of individuals into certain categories of diagnosis in the area of cancer diseases.


In this chapter, a new automatic Computer-Aided Diagnosis (CAD) system is presented for the early detection of lung cancer by analyzing chest 3D computed tomography (CT) images. In the first stage of this CAD system, basic image processing techniques are used to extract the lung regions. The extracted lung regions in each slice are segmented using the Hopfield Neural Network (HANN), which gives good segmentation results in a short time. A Fuzzy Possibilistic C-Means (FPCM) algorithm that incorporates spatial information into the membership function is used for clustering.

Lung Regions Extraction

The main limitation of earlier gray-level thresholding techniques is the problem of selecting suitable and accurate threshold values. Moreover, some approaches, as in [126], need a post-processing step to compensate for the lost parts that may result from thresholding. To overcome these problems, a new method for the automatic extraction of lung regions is proposed in this chapter, based on one of the different features of the raw data obtained using the bit-plane slicing technique. The extraction approach described in this section is fully automatic and relies on a set of basic digital image processing techniques adapted to the CT data. The preliminary results obtained by applying the proposed algorithm to a 3D dataset consisting of 2668 2D CT images from 11 individuals have been significant. A chest CT image consists of different regions such as the background, lung, heart, liver and other organ areas. The goal of the lung region extraction step is to separate the lung regions, our regions of interest (ROIs), from the surrounding anatomical structures.

[Flowchart: Lung Regions Extraction → Segmentation of lung region using FPCM → Analysis of segmented lung region → Formation of diagnosis rules → Testing and Evaluation]

Figure 4.2: The Lung Cancer Detection System

Figure 4.2 outlines the overall detection system, and the proposed method for extracting the lung regions from the 3D CT chest images proceeds as follows. Initially, the bit-plane slicing algorithm [127] is applied to each 2D CT image of the raw data. The resulting binary slices are then analyzed to choose among them the best bit-plane image for extracting the lung regions from the raw CT data with a reasonable degree of accuracy and sharpness.
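
As a simple illustration of this first step, the following minimal Python sketch decomposes one CT slice into its bit-planes; it assumes the slice has already been rescaled to 8 bits, which is an assumption made here for illustration rather than a detail of the original data.

```python
import numpy as np

def bit_planes(ct_slice_8bit):
    """Decompose an 8-bit CT slice into its eight binary bit-plane images.

    ct_slice_8bit: 2D uint8 array (a CT slice rescaled to 8 bits).
    Plane 0 is the least significant bit, plane 7 the most significant.
    """
    return [((ct_slice_8bit >> p) & 1).astype(np.uint8) for p in range(8)]

# Usage sketch: compute all planes for one slice and keep the plane
# (e.g. bit-plane 2, as in Figure 4.4b) that best separates the lung fields.
# planes = bit_planes(slice_8bit)
```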

[Flowchart: Original Image → Bit-Plane Slicing → Median Filter → Lung Border Extraction → Flood Fill Algorithm → Extracted Lung]

Figure 4.3: The proposed lung regions extraction method

To refine the chosen bit-plane image, further operations are applied for different purposes in a sequence of steps. The purpose of the erosion, median filter and dilation steps is to remove irrelevant details that would add extra difficulty to the lung border extraction process. The goal of the outlining step is to extract the structures' borders. The purpose of the lung border extraction step is to separate the lung structures from all other uninteresting structures. Finally, in order to fill the extracted lung regions with their original intensities, a stack-based flood-fill technique is used. Figure 4.4 shows the results of applying the proposed lung region extraction method step by step to a given CT image.


Figure 4.4: Lung regions extraction algorithm: a. original CT image, b. bit-plane-2, c. erosion, d. median filter, e. dilation, f. outlining, g. lung region borders, and h. extracted lung.
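
The refinement chain of Figures 4.3 and 4.4 can be sketched in Python with standard scipy.ndimage operations, as below. The structuring-element sizes, the hole-filling stand-in for the stack-based flood fill and the "two largest components" heuristic for lung border extraction are illustrative assumptions, not the exact implementation used in this work.

```python
import numpy as np
from scipy import ndimage

def extract_lung_regions(bit_plane_img, ct_slice):
    """Rough sketch of the refinement chain applied to the chosen bit-plane.

    bit_plane_img: 2D binary array (the selected bit-plane of one CT slice).
    ct_slice: the original CT slice, used to restore the original intensities.
    """
    # Erosion, median filtering and dilation remove small, irrelevant details
    # that would complicate border extraction.
    mask = ndimage.binary_erosion(bit_plane_img, structure=np.ones((3, 3)))
    mask = ndimage.median_filter(mask.astype(np.uint8), size=3).astype(bool)
    mask = ndimage.binary_dilation(mask, structure=np.ones((3, 3)))

    # Outlining: a structure's border is the set of pixels removed by erosion.
    borders = mask ^ ndimage.binary_erosion(mask)

    # Flood fill (here via hole filling) recovers the interior of closed borders.
    filled = ndimage.binary_fill_holes(borders)

    # Lung border extraction: keep the two largest interior components (the lungs),
    # discarding the body outline and other uninteresting structures.
    labels, n = ndimage.label(filled)
    sizes = ndimage.sum(filled, labels, index=range(1, n + 1))
    keep = 1 + np.argsort(sizes)[-2:]
    lung_mask = np.isin(labels, keep)

    # Fill the extracted lung regions with their original CT intensities.
    return np.where(lung_mask, ct_slice, 0)
```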

Lung Regions Segmentation

After extracting the lung regions from the raw CT images, as described in the previous section, the second step of the proposed CAD system is lung region segmentation, which aims to segment the extracted lung regions in search of cancerous cell candidates, the new regions of interest (ROIs). A large number of candidates are selected, most of them non-cancerous (false positives) and only a few cancerous. ANNs are well-known approaches used for many purposes; they are application independent and work well in most applications. The proposed approach uses an ANN to solve the lung region segmentation problem.

Various ANN techniques could be used in the proposed approach, but the Hopfield Neural Network is adopted here due to its significant performance.

Hopfield Artificial Neural Network (HANN)

The Hopfield Neural Network (HANN) is an ANN that has been used in the literature for different purposes. Its main use in the medical image processing field is the classification of Magnetic Resonance (MR) images of the brain based on energy minimization, as described in [128, 129], where the performance of the HANN is found to be significant. The algorithm has been enhanced to overcome some problems, by considering the minimization of the sum of squared errors and by ensuring the convergence of the network within a pre-specified period of time.

Figure 4.5: Architecture of HANN

The improved version of the HANN used in [129] for MR images of the brain is applied here for the segmentation of the extracted lung regions. The extracted lung region segmentation problem is formulated as the minimization of an energy function constructed from a cost term given as a sum of squared errors. In order to guarantee the convergence of the network, the minimization is performed with a step function that permits the network to reach stability within a pre-specified period of time.

The HANN architecture consists of a single layer representing a grid of N x M neurons, with each column representing a class and each row representing a pixel. All neurons work as both input and output neurons simultaneously; the neurons in each column hold the probability that the corresponding pixel belongs to that class. N is the number of pixels of the given image and M is the number of classes, given as a priori information. The network is designed to classify the feature space without a teacher, based on the compactness of each class calculated using the distance measure (R_kl) between the kth pixel and the centroid of class l. The segmentation problem is formulated as a partition of N pixels among M classes such that the assignment of the pixels minimizes the cost term of the energy (error) function:

E = \frac{1}{2}\sum_{k=1}^{N}\sum_{l=1}^{M} R_{kl}^{2}\, V_{kl}^{n}

where R_{kl} represents the distance measure between the kth pixel and the centroid of class l, defined as follows:

R_{kl} = \left| X_{k} - \bar{X}_{l} \right|

where X_{k} is the feature value (intensity value) of the kth pixel and \bar{X}_{l} is the centroid value of class l, defined as follows:

\bar{X}_{l} = \frac{1}{n_{l}} \sum_{k=1}^{N} X_{k}\, V_{kl}

where n_{l} is the number of pixels in class l. The case n = 2 is considered, which means the energy is defined as a sum of squared errors, and V_{kl} is the output of the kl-th neuron. This approach adopts the winner-takes-all learning rule, where the input-output function for the kth row (to assign a label m to the kth pixel) is given by:

V_{km} = \begin{cases} 1 & \text{if } U_{km} = \max_{l} U_{kl} \\ 0 & \text{otherwise} \end{cases}

The minimization is achieved using the Hopfield neural network (HANN) by solving a set of motion equations satisfying:

\frac{dU_{i}}{dt} = -\mu(t)\, \frac{\partial E(V)}{\partial V_{i}}

where U_{i} and V_{i} respectively represent the input and output of the ith neuron, and \mu(t) is a scalar positive function of time that determines the length of the step to be taken in the direction of the vector d = -\nabla E(V). The suitable selection of the step \mu(t) is something of an art; experimentation and familiarity with a given class of optimization problems are often required to find the best function [129]. The \mu(t) function used in [129] for segmenting MR data with the HANN is adopted in this approach and works well for segmenting the CT data too:

\mu(t) = t\,(T_{s} - t)

where t represents the iteration step and T_s is the pre-specified convergence time. The HANN segmentation algorithm can be summarized in the following steps (a brief illustrative sketch in code follows the list):

Initialize the input of neurons to random values.

Apply the input-output function (Vkl) defined above, to obtain the new output values for each neuron, establishing the assignment of pixels to classes. The class membership probabilities grow or diminish in a winner-takes-all style as a result of contention between classes. In winner-takes-all model, the neuron with the highest input value fires and takes the value 1, and all remaining neurons take the value 0.

Compute the centroid (Xl) as defined above, for each class l.

Compute the energy function (E) as defined above,

Update the inputs (U_i) using the following equation; learning occurs as the neuron inputs are adjusted in an attempt to reduce the output error:

U_{kl}(t+1) = U_{kl}(t) - \mu(t)\, R_{kl}^{2}\, V_{kl}

Repeat from step 2 until t = Ts. This process iteratively modifies the pixel label assignments to reach a near optimal final segmentation map.
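
The steps above can be illustrated with the following minimal Python sketch. The discrete input update and the step schedule \mu(t) follow the plain gradient-descent reading of the motion equations and are assumptions for illustration rather than the exact implementation of [129].

```python
import numpy as np

def hann_segment(pixels, n_classes, Ts=120, seed=0):
    """Minimal sketch of the winner-takes-all Hopfield-style segmentation.

    pixels: 1D array of intensity values of the extracted lung region.
    n_classes: number of classes M (given a priori).
    Ts: pre-specified convergence time (number of iterations).
    """
    rng = np.random.default_rng(seed)
    N = pixels.size
    U = rng.random((N, n_classes))              # step 1: random neuron inputs

    for t in range(1, Ts + 1):
        # Step 2: winner-takes-all input-output function, one firing neuron per row.
        V = np.zeros_like(U)
        V[np.arange(N), U.argmax(axis=1)] = 1.0

        # Step 3: class centroids; step 4: distances and energy (sum of squared errors).
        counts = V.sum(axis=0) + 1e-12
        centroids = (V * pixels[:, None]).sum(axis=0) / counts
        R = np.abs(pixels[:, None] - centroids[None, :])
        E = 0.5 * np.sum(R ** 2 * V)

        # Step 5: update the inputs along -dE/dV with step mu(t) (assumed schedule).
        mu = t * (Ts - t) / float(Ts * Ts)
        U = U - mu * R ** 2 * V

    return U.argmax(axis=1), centroids, E
```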

HANN Segmentation Results

The HANN with the specifications mentioned above is applied to each of the extracted lung regions of the whole data set, and the results are retained for further processing in the following steps. The HANN segmentation results are accurate and homogeneous. In addition, the HANN takes a short time to reach the desired segmentation results: it needs fewer than 120 iterations (about 9 seconds on average).

Fuzzy Possibilistic C Mean (FPCM)

FPCM is a clustering algorithm that combines the characteristics of both fuzzy and possibilistic c-means. Memberships and typicalities are both vital for the correct interpretation of data substructure in a clustering problem. Thus, an objective function for FPCM that depends on both memberships and typicalities can be written as:

J_{m,\eta}(U, T, V) = \sum_{i=1}^{c} \sum_{k=1}^{n} \left( u_{ik}^{m} + t_{ik}^{\eta} \right) d^{2}(x_{k}, v_{i})

with the following constraints:

\sum_{i=1}^{c} u_{ik} = 1 \;\; \forall\, k, \qquad \sum_{k=1}^{n} t_{ik} = 1 \;\; \forall\, i, \qquad u_{ik},\, t_{ik} \in [0, 1]

A solution of the objective function can be obtained via an iterative process in which the degrees of membership, the typicalities and the cluster centers are updated via:

u_{ik} = \left[ \sum_{j=1}^{c} \left( \frac{d_{ik}}{d_{jk}} \right)^{2/(m-1)} \right]^{-1}, \qquad
t_{ik} = \left[ \sum_{j=1}^{n} \left( \frac{d_{ik}}{d_{ij}} \right)^{2/(\eta-1)} \right]^{-1}, \qquad
v_{i} = \frac{\sum_{k=1}^{n} \left( u_{ik}^{m} + t_{ik}^{\eta} \right) x_{k}}{\sum_{k=1}^{n} \left( u_{ik}^{m} + t_{ik}^{\eta} \right)}

FPCM produces memberships and possibilities simultaneously, along with the usual point prototypes or cluster centers for each cluster. FPCM is a hybridization of Possibilistic C-Means (PCM) and Fuzzy C-Means (FCM) that addresses several of their individual shortcomings.
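
For concreteness, the following Python sketch implements the standard FPCM updates given above. The variable names and the simple random initialization are illustrative, and the loop is written for clarity on modest sample sizes rather than for the efficiency needed on full CT volumes.

```python
import numpy as np

def fpcm(X, c, m=2.0, eta=2.0, n_iter=50, seed=0):
    """Illustrative sketch of the FPCM updates.

    X: (n, d) data matrix, e.g. intensity features of lung-region pixels.
    c: number of clusters; m, eta: fuzzifier and typicality exponents.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    V = X[rng.choice(n, size=c, replace=False)]           # initial cluster centres

    for _ in range(n_iter):
        # d[k, i]: distance of point k to centre i.
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12

        # Membership u[k, i]: sums to 1 over the clusters for each point.
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)

        # Typicality t[k, i]: sums to 1 over the points for each cluster.
        t = 1.0 / np.sum((d[:, :, None] / d.T[None, :, :]) ** (2.0 / (eta - 1.0)), axis=2)

        # Cluster centres weighted by u^m + t^eta.
        w = u ** m + t ** eta
        V = (w.T @ X) / w.sum(axis=0)[:, None]

    return u, t, V
```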

The advantages of the FPCM method are the following:

it provides regions that are more homogeneous than those of other techniques,

it reduces spurious blobs,

it removes noisy spots, and

it is less sensitive to noise than other techniques.

Features Extraction and Formulation of Diagnostic Rules

Once the segmentation results are obtained, the approach starts from the initial cancerous candidate objects, or nodules, which are the members of one of the classes resulting from the HANN segmentation algorithm. The members of the class with the smallest number of members are taken as the initial cancerous candidate objects, and the members of all other classes are discarded. Then, different features are extracted for use in the following diagnostic step, where diagnostic rules are formulated to remove the large number of false candidates that usually result from the segmentation step.

Feature Extraction

The features used in the diagnostic rules are obtained from the literature:

Area of the candidate region

The Maximum Drawable Circle (MDC) inside the candidate region

Mean intensity value of the candidate region.

It is found experimentally that the above features are suitable for achieving an accurate diagnosis. The first feature (the area of the candidate region or object) is used to:

Eliminate isolated pixels (seen as noise in the segmented image).

Eliminate very small candidate object (Area is less than a thresholding value).

This feature generally eliminates a good number of extra candidate regions that do not have a chance to form a nodule; moreover it also reduces the computation time needed in the next diagnostic steps.
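
As an illustration of how the area feature can be applied, the following sketch labels the connected candidate objects and drops those below an area threshold. The function name and the use of scipy.ndimage are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def filter_small_candidates(candidate_mask, min_area):
    """Label candidate objects and drop isolated pixels / very small objects.

    candidate_mask: 2D boolean array marking the initial cancerous candidates
    (members of the smallest class from the segmentation step).
    min_area: plays the role of the area threshold used by the first feature.
    """
    labels, n = ndimage.label(candidate_mask)          # connected components (default 4-connectivity)
    areas = ndimage.sum(candidate_mask, labels, index=np.arange(1, n + 1))
    kept = np.arange(1, n + 1)[areas >= min_area]      # labels of objects passing the area test
    return np.isin(labels, kept), labels, areas
```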

The second feature represents each candidate region by its corresponding MDC. The method begins by drawing a circle starting from a point inside a candidate region or object; the circle must satisfy the condition that all pixels inside it belong to the object being processed. Every pixel inside the object is considered as a starting point. The process starts by drawing a circle of one-pixel radius around the starting point. If this succeeds, the radius is increased by one pixel and the circle is redrawn. This is repeated until the circle exceeds the border of the region, and the radius of the last successful drawing, for which all pixels inside the circle belong to the object, is recorded as the MDC for that starting point. The process is repeated to cover all candidate objects, and each candidate object stores its maximum drawable circle, which is used in the diagnostic process to eliminate further false positive cancerous candidates. The drawing of the circle is simulated by examining the eight neighbours of the starting point: if all neighbouring pixels belong to the same object as the starting point, the drawing succeeds, which means that a one-pixel radius is achieved. For the two-pixel radius the 24 neighbours are examined, and so on.
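
The neighbourhood-growing procedure just described can be sketched as follows; the implementation below is a literal, unoptimized reading of the text, using square neighbourhoods as stand-ins for the drawn circles.

```python
import numpy as np

def max_drawable_circle(obj_mask):
    """Maximum drawable circle (MDC) radius for one candidate object.

    obj_mask: 2D boolean array, True for the pixels of the candidate object.
    A radius-r 'circle' around a pixel is accepted when the whole
    (2r+1)x(2r+1) neighbourhood lies inside the object
    (8 neighbours for r = 1, 24 neighbours for r = 2, ...).
    """
    H, W = obj_mask.shape
    best = 0
    for y, x in zip(*np.nonzero(obj_mask)):
        r = 1
        while True:
            y0, y1 = y - r, y + r + 1
            x0, x1 = x - r, x + r + 1
            # Stop when the neighbourhood leaves the image or the object.
            if y0 < 0 or x0 < 0 or y1 > H or x1 > W or not obj_mask[y0:y1, x0:x1].all():
                break
            r += 1
        best = max(best, r - 1)
    return best
```

In practice, a Euclidean distance transform (e.g. scipy.ndimage.distance_transform_edt) would give an essentially equivalent inscribed-radius measure far more efficiently, but the explicit loop mirrors the description above.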

The third feature is the mean CT intensity value of the candidate region, which is used to eliminate further regions that do not have the features of cancerous cells. The mean intensity value represents the average intensity of all the pixels that belong to the same region (object) and is calculated as follows:

\text{Mean}_{j} = \frac{1}{n} \sum_{i=1}^{n} \text{Intensity}(i)

where j denotes the object index and ranges from 1 to the total number of candidate objects in the whole image. Intensity (i) denotes the CT intensity value of pixel i, and i ranges from 1 to n, where n represents the total number of pixels belonging to object j.

Formulation of Diagnostic Rules

The abovementioned features are extracted and then some diagnostic rules are formulated to use them in the proposed CAD system.

Rule 1: For each candidate object, if its area is below the threshold value T1, it is deleted from the candidate list. This filter reduces the number of false positives present among the initial candidate objects, and it also reduces the computation time needed by the following diagnostic rules.

Rule 2: For each candidate object, if its MDC value is below the threshold value T2, it is deleted from the candidate list. T2 is chosen to be a radius of 2 pixels, so any candidate object with an MDC of less than 2 pixels is removed, as it is unlikely to be a nodule and very likely to be a blood vessel. This rule is based on the medical fact that true lung nodules, especially small ones, show a certain circularity. This filter removes a large number of vessels, which in general have a thin, oblong or line-like shape.

Rule 3: For each candidate object, if the value of its mean intensity lies outside a particular range, i.e. between T3 and T4, it is deleted from the candidate list. Appropriate values for both thresholds T3 and T4 are chosen based on medical information and experimentation; the proposed approach uses a CT-intensity value of -9000 for the threshold T3 and -12500 for the threshold T4. This filter removes still more false positives.
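
Putting the three rules together, a candidate filter might look like the following sketch. The record layout of each candidate object is hypothetical, and the thresholds default to the values quoted above.

```python
def apply_diagnostic_rules(candidates, T1, T2=2, T3=-9000, T4=-12500):
    """Apply the three diagnostic rules to a list of candidate objects.

    candidates: list of dicts with keys 'area', 'mdc' (radius in pixels) and
    'mean_intensity'; this layout is purely illustrative.
    T1..T4: thresholds; T3 and T4 bound the accepted mean CT-intensity range.
    """
    low, high = sorted((T3, T4))
    kept = []
    for obj in candidates:
        if obj['area'] < T1:                            # Rule 1: isolated pixels / tiny objects
            continue
        if obj['mdc'] < T2:                             # Rule 2: thin, vessel-like candidates
            continue
        if not (low <= obj['mean_intensity'] <= high):  # Rule 3: intensity outside the range
            continue
        kept.append(obj)
    return kept
```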

After all the filters are applied, only a small number of cancerous candidate objects remain. The CAD system marks all the remaining candidates as possible cancerous regions, and the images related to these regions are reported and displayed to radiologists for the final decision. The purpose of the proposed CAD system is therefore not to replace radiologists, but to assist them by providing a tool that may help detect lung cancer at early stages by alerting them to possible abnormalities. Moreover, the proposed approach aims at improving the accuracy of detection and minimizing the time spent by radiologists in analyzing the vast number of slices per patient (more than 300).


This chapter discussed an automatic CAD system for the early detection of lung cancer by analyzing raw chest CT images. The approach starts by extracting the lung regions from the CT image using several image processing techniques, including bit-plane slicing, erosion, median filtering, dilation, outlining and a flood-fill algorithm. A novel use of the bit-plane slicing technique is introduced in place of the thresholding technique usually applied in the first step of the extraction process to convert the CT image into a binary image; bit-plane slicing is faster and both data- and user-independent compared to thresholding. After the extraction step, the extracted lung regions are segmented using the HANN and the Fuzzy Possibilistic C-Means (FPCM) algorithm; the HANN gives homogeneous results in a short time, and FPCM is a powerful method for noisy image segmentation that works for both single- and multiple-feature data with spatial information. The initial lung candidate nodules resulting from the HANN segmentation are then analyzed to extract a set of features used in the diagnostic rules, which are formulated in the next step to discriminate between cancerous and non-cancerous candidate nodules. The features extracted in the proposed system are the area of the candidate region, the maximum drawable circle (MDC) inside the region and the mean pixel intensity value of the region.

The next chapter deals with the next proposed approach called "A Computer Aided Diagnosis System for Lung Cancer Detection Using Support Vector Machine".