Decision Support System for Diabetic Retinopathy


Prolonged diabetes may affect the tiny blood vessels of the retina, causing diabetic retinopathy (DR). Routine eye screening of diabetic subjects helps to detect DR at an early stage. It is laborious and time consuming for doctors to examine many fundus images continuously; a decision support system for DR detection can therefore reduce the burden on ophthalmologists. In this work, we have used the Discrete Wavelet Transform (DWT) and a Support Vector Machine (SVM) classifier for automated detection of the normal and DR classes. Wavelet decomposition was performed up to the second level and eight energy features were extracted: two energy features from the approximation coefficients of the two levels, and six energy values from the detail coefficients in three orientations (horizontal, vertical, and diagonal). These features were fed to the SVM classifier with various kernel functions (linear, radial basis function, and polynomials of order 2 and 3) to determine the highest classification accuracy. We obtained the highest average classification accuracy, sensitivity and specificity, each above 99%, with the SVM classifier (polynomial kernel of order 3) using three DWT features. We have also proposed an integrated index, the Diabetic Retinopathy Risk Index (DRRI), built from clinically significant wavelet energy features, to identify the normal and DR classes using just one number. We feel that the DRRI can be used as an adjunct tool by doctors during eye screening to cross-check their diagnosis.

Keywords: Eye, Diabetic retinopathy, SVM, DWT, classifier

* Corresponding author: Department of Electronics and Communications

Manipal Institute of Technology, Manipal University,

Manipal 576104, Karnataka, India

Email: kevinkurkal@yahoo.co.in

1. INTRODUCTION

Diabetic retinopathy (DR) is damage to the retina of the eye caused by prolonged diabetes. It is a vascular disorder affecting the microvasculature of the retina in the human eye. The disease progresses with time, producing no noticeable symptoms until the damage has occurred [1]. It is an ocular manifestation of systemic disease which affects up to 80% of all patients who have had diabetes for 10 years or more. Despite these intimidating statistics, research indicates that at least 90% of these new cases could be prevented with proper, vigilant treatment and regular monitoring of the eyes [2].

A normal retinal image is shown in Fig. 1(a). It shows clear blood vessels and the optic disc, the bright circular area where the nerve fibres leave the eye and carry visual signals to the brain. The macula is the dark spot in the retina, which provides detailed central vision.

In diabetic retinopathy (DR), the tiny blood vessels of the retina leak blood and fluid onto the retina, forming lesions such as microaneurysms (MA), hemorrhages, hard exudates, cotton wool spots or venous loops [3,4]. Nonproliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR) are the two broad classes of DR [4].

Diabetic retinopathy progresses through four stages [4, 5]:

Mild nonproliferative retinopathy: At least one microaneurysm, with or without hemorrhages, hard exudates, cotton wool spots or venous beading, is present. It has been reported that approximately 40 percent of people with diabetes have at least mild signs of DR [6].

Moderate nonproliferative retinopathy: Many microaneurysms, hemorrhages, cotton wool spots and venous beading may be seen. Sixteen percent of the moderate NPDR patients may develop PDR within one year [7].

Severe nonproliferative retinopathy: Diagnosed when any one of the following three features is present (the 4-2-1 rule): (i) many hemorrhages and microaneurysms in all four retinal quadrants; (ii) venous beading in two or more quadrants; (iii) intraretinal microvascular abnormalities (IRMA) in at least one quadrant. Subjects with severe NPDR have a 50% chance of progressing to PDR within one year. Patients with two or more of the above features are graded as very severe NPDR [7].

Proliferative retinopathy: In this advanced stage, the signals sent by the retina for nourishment trigger the growth of new, thin, fragile blood vessels. Leakage from these blood vessels may result in severe vision loss and even blindness. About 3 percent of people at this stage may suffer severe visual loss [6].

Fig.1 shows the typical fundus images of normal, mild NPDR, moderate NPDR, severe NPDR and PDR classes.

[Fig. 1 image annotations: exudate, hemorrhage, blood vessel, macula, optic disc]

Fig. 1 Typical fundus images at the different stages of DR: (a) Normal (b) Mild NPDR (c) Moderate NPDR (d) Severe NPDR (e) PDR.

Retinal image analysis provides an opportunity to understand the natural development and treatment of eye disorders such as DR and glaucoma. Commonly categorized anatomical structure features include the optic disc, blood vessels, microaneurysms, hemorrhages, cotton wool spots and exudates [8]. Accurate detection of these features from the fundus images will help to improve the accuracy of automated DR detection.

Many researchers have proposed methods for automatic identification of the features used for DR detection. The diameter of the blood vessels was used as a measure to detect DR [9]; the blood vessels were detected based on regional recursive hierarchical composition using quad trees and post-filtration of edges. Yellow and red colored lesions were used to classify normal and DR fundus images using an automated fundus photographic image-analysis algorithm [10]. A neural network classifier was used [11] to classify normal and DR images based on the detection of arteries and veins. The optic disc, bright lesions and dark lesions, along with the arteries and veins, were used for the early detection of DR [12]. Microaneurysms were identified using a bilinear top-hat transformation method [13]. Mathematical morphology based techniques [14] were used to detect hemorrhages. A recursive region-growing technique with the Moat operator [15] was proposed for the detection of MAs, hemorrhages, exudates and cotton wool spots. The methods discussed above are mainly useful for the analysis of specific features of the fundus images.

Various methods have been used to classify fundus images based on techniques such as higher-order spectra (HOS), mathematical morphology and textural features. Acharya et al. [16] proposed a technique for the automatic detection of normal, mild DR, moderate DR, severe DR, and PDR stages using the bispectral invariant features of higher-order spectra and an SVM classifier. They reported a classification accuracy of 82%.

Based on mathematical morphology techniques, fundus images were classified as normal or abnormal depending on the number, location, and type of discrete microvascular lesions in the fundus of the eye [10]; a sensitivity of 71.4% and a specificity of 96.7% were obtained using an automated fundus photographic image-analysis algorithm. Wong et al. [17] classified the normal, mild DR, moderate DR, severe DR, and PDR stages using morphological image-processing techniques and a feed-forward neural network; in their work, the area and perimeter of the RGB components of the blood vessels were chosen as the features for the classifier. Morphological techniques were also used to compute the area of the exudates and blood vessels, along with texture parameters, to classify the fundus images into normal, NPDR, and PDR [5]. The drawback of morphological techniques is that their performance depends on the accurate choice of the structuring element.

Acharya et al. [18] used statistical texture features such as homogeneity, correlation, short run emphasis, long run emphasis, and run percentage to classify the fundus images into normal, NPDR, PDR and macular edema (ME) classes. The authors obtained an average classification accuracy of 85.2% for normal and the different classes of DR. Although texture based feature extraction techniques have proven successful, it is still a challenge to generate features that capture generalized structural information from the retinal images.

In this work, we have used the discrete wavelet transform (DWT) to extract the important features. The detailed coefficients of the DWT can successfully capture the subtle variations (blood vessels, hemorrhages, microaneurysms and exudates) in the image.

The proposed system is shown in Fig. 2. In the offline mode of operation, the normal and DR images in the database are subjected to image pre-processing. The pre-processed images are decomposed to the second level using the DWT. The energy values in all eight sub-bands are computed. The significance of the extracted features is evaluated using Student's t-test. The significant feature set and the ground truth of whether the images belong to the normal or abnormal class (as determined by doctors or by laboratory results) are used as inputs to the classifier in order to train it to determine appropriate parameters for differentiating the two classes based on the features. A ten-fold cross validation strategy is then used to identify the best classifier. In the online mode, the significant features are extracted from the input pre-processed fundus image and fed to the trained classifier, which labels the image as normal or diseased.

Fig. 2 The proposed system.

The layout of the paper is as follows: Section 2 provides information about the database used for analysis; Section 3 deals with the technique used for pre-processing the raw retinal images and briefly explains the theory behind the DWT; Section 4 discusses the SVM classifier; Section 5 presents the results, which are discussed in Section 6. Finally, the paper concludes in Section 7.

2. DATA ACQUISITION

The retinal images used for this work were taken using a TOPCON non-mydriatic retinal camera (TRC-NW200). The built-in charge-coupled device (CCD) camera provides up to 3.1 megapixels for high quality imaging. The inbuilt imaging software was used to store the images in JPEG format. The data were recorded at the Department of Ophthalmology, Kasturba Medical College, Manipal, India. The images were photographed and certified by the physicians of the Ophthalmology department. The ethics committee, consisting of senior doctors, approved the images for this research purpose. The images were taken with a resolution of 2588 x 1958 (rows x columns). A total of 240 normal and DR fundus images were collected for the study, from subjects in the age group of 24 to 57 years. The number of retinal images in each group is given in Table 1.

Table 1 Number of retinal images in normal, and DR classes.

Class               Number of images
Normal              120
DR: Mild NPDR        25
DR: Moderate NPDR    40
DR: Severe NPDR      30
DR: PDR              25
DR total            120

3. METHODOLOGY

3.1 Pre-processing

The raw retinal images were resized to 740 x 576 pixels. Each of these images was subjected to pre-processing using adaptive histogram equalization in order to remove the non-uniformity of the background. Non-uniform illumination during image acquisition and variation in the color of the eye pigment are the two major causes of this non-uniformity. The objective of applying adaptive histogram equalization is to reassign the intensity values of the pixels in the input image such that the contrast is uniform across the output image.
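As a concrete illustration, the following sketch shows how this pre-processing step could be implemented in Python with OpenCV's contrast limited adaptive histogram equalization (CLAHE). The file name, the use of the green channel, and the clip-limit and tile-size values are illustrative assumptions and are not specified in this paper.

import cv2

def preprocess_fundus(path, size=(740, 576)):
    """Resize a raw fundus image and equalize it with CLAHE."""
    img = cv2.imread(path)                      # BGR colour image
    img = cv2.resize(img, size)                 # 740 x 576, as used in this work
    green = img[:, :, 1]                        # assumption: the green channel carries most retinal contrast
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # illustrative parameters
    return clahe.apply(green)

equalized = preprocess_fundus("fundus_001.jpg")  # hypothetical file name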

3.2 Discrete Wavelet Transform based Features

The DWT captures both the spatial and frequency information of a signal. It analyzes the image by decomposing it into a coarse approximation via low-pass filtering and into detail information via high-pass filtering; each step is a convolution of the image with the corresponding filter impulse response [19]. This decomposition is performed recursively on the low-pass approximation coefficients obtained at each level, until the required number of iterations is reached.

Let the image be represented by an M x N gray-scale matrix I[i, j], where each element of the matrix represents the gray scale intensity of a single pixel in the image. Each non-border pixel has eight adjacent neighboring pixels. The image is low- and high-pass filtered along the rows, and the results of each filter are down-sampled by two. Each of these sub-signals is then low- and high-pass filtered along the columns, and the results are again down-sampled by two. Hence, the original data is split into four sub-images, each of size M/2 by N/2, containing information from the different frequency components. The resultant 2-D DWT coefficients are the same irrespective of whether the matrix is traversed right-to-left or left-to-right.

Fig. 3 depicts the coefficients obtained at level 1 of the discrete wavelet transform, where I is the input image, l[n] and h[n] are the impulse responses of the low-pass and high-pass filters respectively, and ↓2 denotes down-sampling by two.

The approximation coefficients (A1) are obtained by low-pass filtering of both rows and columns; the resulting sub-image is similar to the original image. The Dh1 coefficients (detailed horizontal coefficients of level 1) are obtained by low-pass filtering of rows and high-pass filtering of columns. The Dv1 coefficients (detailed vertical coefficients of level 1) are obtained by high-pass filtering of rows and low-pass filtering of columns. The Dd1 coefficients (detailed diagonal coefficients of level 1) are obtained by high-pass filtering of both rows and columns. The sub-bands Dh1, Dv1 and Dd1 are the detailed sub-bands representing the high-frequency components of the image. Thus, the first level of the 2D DWT yields four resultant matrices, namely A1, Dh1, Dv1 and Dd1, whose elements are intensity values. Similarly, A2, Dh2, Dv2 and Dd2 are the resultant matrices of the second level of the 2D DWT.
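A minimal sketch of one decomposition level using the PyWavelets library is given below, assuming that equalized is the pre-processed grayscale fundus image from the earlier sketch and using the biorthogonal wavelet bior3.7 adopted later in this section. The mapping of PyWavelets' detail sub-bands onto the Dh/Dv/Dd naming used here is also an assumption.

import pywt

# One level of the 2-D DWT: A1 is the approximation sub-band and
# Dh1, Dv1, Dd1 are taken as the horizontal, vertical and diagonal details.
A1, (Dh1, Dv1, Dd1) = pywt.dwt2(equalized, 'bior3.7')
print(A1.shape, Dh1.shape, Dv1.shape, Dd1.shape)   # each roughly half the image size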


Fig. 3 Block diagram of wavelet decomposition.


Fig. 4 DWT decomposition of fundus images: (a) Normal (b) DR.

Fig. 4 shows the 2D DWT decomposition of a normal and a DR image. It can be seen from the figure that there are sudden changes in the high frequency sub-bands of the DR image. The biorthogonal wavelet bior3.7 was selected as the basis for decomposing the retinal images [20].

The energy values in the various sub-bands are evaluated using equation (1):

E = Σi Σj [C(i, j)]²   (1)

where C(i, j) denotes the wavelet coefficient at row i and column j of the sub-band.

To test the significance of the extracted energy features, Student's t-test was performed. It is a statistical hypothesis test in which the test statistic follows a Student's t distribution if the null hypothesis is true. The null hypothesis assumed here is that there is no significant difference between the means of each feature in the two classes. The p-value is the probability of observing a difference between the means at least as large as the one measured if the null hypothesis were true; in general, the null hypothesis is rejected if the p-value is less than 0.05. In our work, all eight extracted features were clinically significant (p < 0.0001).
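The feature extraction and significance test described above could be sketched as follows with PyWavelets, NumPy and SciPy. The energy is computed here as the sum of squared coefficients (any normalization used in equation (1) is an assumption), and normal_images and dr_images are assumed to be lists of pre-processed images.

import numpy as np
import pywt
from scipy.stats import ttest_ind

SUBBAND_NAMES = ['A2', 'A1', 'Dh2', 'Dh1', 'Dv2', 'Dv1', 'Dd2', 'Dd1']

def wavelet_energies(image, wavelet='bior3.7'):
    """Return the eight sub-band energies [A2, A1, Dh2, Dh1, Dv2, Dv1, Dd2, Dd1]."""
    A2, (Dh2, Dv2, Dd2), (Dh1, Dv1, Dd1) = pywt.wavedec2(image, wavelet, level=2)
    A1, _ = pywt.dwt2(image, wavelet)            # level-1 approximation
    subbands = [A2, A1, Dh2, Dh1, Dv2, Dv1, Dd2, Dd1]
    return [float(np.sum(np.square(b))) for b in subbands]   # energy per sub-band

# Feature matrices for the two classes (image lists assumed to be available).
X_normal = np.array([wavelet_energies(img) for img in normal_images])
X_dr = np.array([wavelet_energies(img) for img in dr_images])

# Two-sample Student's t-test per feature; p < 0.05 is treated as significant.
for k, name in enumerate(SUBBAND_NAMES):
    t_stat, p_val = ttest_ind(X_normal[:, k], X_dr[:, k])
    print(f'{name}: t = {t_stat:.2f}, p = {p_val:.3g}')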

4. SVM CLASSIFIER

The Support Vector Machine (SVM) is a set of related supervised learning methods used for classification and regression [21]. It belongs to the family of generalized linear classifiers. The SVM minimizes the empirical classification error while maximizing the geometric margin. It applies a simple linear method to the data, but in a higher dimensional feature space that is non-linearly related to the input space [22].

For classification, SVMs separate the different classes of data by means of a hyper-plane. The objective of SVM modeling is to find the separating hyper-plane that separates the two classes with an optimal margin [23]. Let the separating hyper-plane be defined by:

x.w + b = 0 (2)

where b is a scalar and w is a p-dimensional vector; w points perpendicular to the separating hyper-plane. The parallel hyper-planes bounding the margin can be described by the equations

w.x + b = 1 (3)

w.x + b = -1 (4)

If the training data are linearly separable, we can select these hyper-planes so that there are no points between them, and then try to maximize their separation. By geometry, the distance between the two hyper-planes is 2/||w||, so we need to minimize ||w||. For N linearly separable data points labeled {xi, yi}, i = 1, 2, …, N, the optimum boundary with the maximal-margin criterion is found by minimizing the objective function

E = (1/2)||w||² + C Σi L(ζi) (5)

subject to yi (xi.w + b) ≥ 1 − ζi for all i, where ζi is a "slack" variable that represents the amount by which each point is misclassified, L is a cost function and C is a hyper-parameter that trades off the effect of minimizing the empirical risk against maximizing the margin. It controls the penalty paid by the SVM for misclassifying a training point and thus the complexity of the prediction function.

The solution for the optimum boundary w0 is a linear combination of a subset s ⊆ {1, …, N} of the training patterns that lie on the margin, called the support vectors. The support vectors carry all the relevant information about the classification problem. Most real-world data are not linearly separable; in such cases, kernel functions are used. A kernel function returns the inner product between two points in a suitable feature space, thus defining a notion of similarity, with little computational cost even in very high-dimensional spaces. The idea of the kernel function is to enable operations to be performed in the input space rather than in the potentially high-dimensional feature space, so that the inner product does not need to be evaluated explicitly in the feature space. The kernel implicitly maps the attributes of the input space to the feature space.

Three types of SVM kernel were studied and evaluated using the eight features obtained after wavelet decomposition, in order to select the best kernel function:

Linear: K(xi, xj) = xi^T xj (6)

Polynomial: K(xi, xj) = (xi . xj + 1)^d (7)

Radial basis function (RBF): K(xi, xj) = exp(−||xi − xj||² / (2σ²)) (8)

where σ and d are the kernel parameters.

The linear kernel is useful when dealing with large sparse data vectors. The polynomial kernel is popular in image processing, and the radial basis function (RBF) kernel is useful when there is no prior knowledge of the data [24].
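For illustration, equations (6)-(8) can be written directly as small NumPy functions; the default values of d and sigma below are placeholders, not values used in this work.

import numpy as np

def linear_kernel(xi, xj):
    return float(np.dot(xi, xj))                                              # equation (6)

def polynomial_kernel(xi, xj, d=3):
    return float(np.dot(xi, xj) + 1) ** d                                     # equation (7)

def rbf_kernel(xi, xj, sigma=1.0):
    return float(np.exp(-np.linalg.norm(xi - xj) ** 2 / (2 * sigma ** 2)))    # equation (8)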

We executed a 10-fold cross validation in order to select the training and testing datasets. The whole dataset was split into ten folds such that each fold contains approximately the same proportion of class samples as the original dataset. Nine folds of the data (the training set) were used for classifier development, and the built classifier was evaluated using the remaining fold (the testing set). This procedure was repeated ten times, using a different fold for testing each time. The performance of the classifiers was evaluated using the following measures: positive predictive value (PPV), specificity, sensitivity and accuracy. The average over the ten folds was then used to report the PPV, specificity, sensitivity and accuracy.

The positive predictive value of a test is the probability that a patient testing positive actually has the disease. The specificity of the test is the proportion of people without the disease who have a negative test; the higher the specificity, the lower the false positive rate. The sensitivity of a test is the proportion of people with the disease who have a positive test result; the higher the sensitivity, the greater the detection rate and the lower the false negative (FN) rate. Accuracy is the proportion of correctly classified cases in the given database.
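A sketch of the ten-fold cross validation and these performance measures using scikit-learn is given below. The feature matrix X and label vector y (1 = DR, 0 = normal) are assumed to hold the significant DWT energy features and the corresponding ground truth; the polynomial kernel of order 3 is used as an example.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_scores = []
for train_idx, test_idx in skf.split(X, y):
    clf = SVC(kernel='poly', degree=3)            # polynomial kernel of order 3
    clf.fit(X[train_idx], y[train_idx])
    tn, fp, fn, tp = confusion_matrix(y[test_idx], clf.predict(X[test_idx])).ravel()
    fold_scores.append({
        'PPV': tp / (tp + fp),
        'Sensitivity': tp / (tp + fn),
        'Specificity': tn / (tn + fp),
        'Accuracy': (tp + tn) / (tp + tn + fp + fn),
    })

# Average the fold-wise measures, as reported in the results.
averages = {k: 100 * np.mean([s[k] for s in fold_scores]) for k in fold_scores[0]}
print(averages)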

5. RESULTS

Table 2 shows the DWT energy features extracted at level 1 and level 2. It can be seen from the table that these features are clinically significant (p < 0.05) and can be used for automated classification. The energies of the approximation coefficients are higher in the DR class due to the textures of exudates, microaneurysms, and hemorrhages. The energies of the detailed coefficients are lower in the DR class because the number of visible blood vessels decreases as DR progresses.

Table 2 Range of the wavelet based features (energies) extracted from the retinal images.

Feature   Normal                      DR                          p-value
A2        5.828E+03 ± 1.475E+03       9.939E+03 ± 2.346E+03       < 0.0001
A1        5.663E+03 ± 1.437E+03       9.647E+03 ± 2.272E+03       < 0.0001
Dh2       8.29 ± 10.3                 3.50 ± 5.08                 < 0.0001
Dh1       0.593 ± 0.682               0.258 ± 7.738E-02           < 0.0001
Dv2       12 ± 11.5                   8.88 ± 4.56                 < 0.0001
Dv1       1.26 ± 1.55                 0.661 ± 0.329               < 0.0001
Dd2       1.69 ± 1.97                 0.774 ± 0.472               < 0.0001
Dd1       7.253E-02 ± 6.524E-02       4.553E-02 ± 6.546E-03       < 0.0001

For the normal images the contrast is zero, whereas it is high for DR images due to the presence of hard exudates, microaneurysms and hemorrhages. It is also observed that the blood vessel area shows a gradual increase from normal to DR images. Hence we can conclude that, with increasing severity of DR, the approximation coefficients increase while the detailed coefficients decrease.

Table 3 shows the performance of the SVM classifier for various feature combinations. It can be seen from the table that the energy features Dh2, Dh1 and Dd2 yielded the highest average accuracy, average sensitivity and average specificity of 99.17% for the polynomial kernel of order 3.

Table 3 Performance measures for the SVM classifier.

SVM Kernel           PPV (%)   Sensitivity (%)   Specificity (%)   Accuracy (%)

Using approximation coefficients (A2 and A1)
Linear               70.48     61.67             74.17             67.92
Polynomial order 2   74.39     50.83             82.50             66.67
Polynomial order 3   76.47     54.17             83.33             68.75
Polynomial order 4   73.04     70.00             74.17             72.08
RBF                  62.90     65.00             61.67             63.33

Using horizontal detail coefficients (Dh2 and Dh1)
Linear               97.30     90.00             97.50             93.75
Polynomial order 2   97.56     100.00            97.50             98.75
Polynomial order 3   98.33     98.33             98.33             98.33
Polynomial order 4   98.33     98.33             98.33             98.33
RBF                  97.56     100.00            97.50             98.75

Using vertical detail coefficients (Dv2 and Dv1)
Linear               83.33     87.50             82.50             85.00
Polynomial order 2   92.92     87.50             93.33             90.42
Polynomial order 3   93.04     89.17             93.33             91.25
Polynomial order 4   92.04     86.67             92.50             89.58
RBF                  91.53     90.00             91.67             90.83

Using diagonal detail coefficients (Dd2 and Dd1)
Linear               91.38     88.33             91.67             90.00
Polynomial order 2   99.12     94.17             99.17             96.67
Polynomial order 3   96.69     97.50             96.67             97.08
Polynomial order 4   96.69     97.50             96.67             97.08
RBF                  96.69     97.50             96.67             97.08

Using horizontal detail coefficients (Dh2 and Dh1) and diagonal detail coefficient (Dd2)
Linear               94.69     89.17             95.00             92.08
Polynomial order 2   97.54     99.17             97.50             98.33
Polynomial order 3   99.17     99.17             99.17             99.17
Polynomial order 4   97.54     99.17             97.50             98.33
RBF                  97.56     100.00            97.50             98.75

Using horizontal detail coefficients (Dh2 and Dh1) and diagonal detail coefficient (Dd1)
Linear               97.30     90.00             97.50             93.75
Polynomial order 2   97.54     99.17             97.50             98.33
Polynomial order 3   98.32     97.50             98.33             97.92
Polynomial order 4   97.50     97.50             97.50             97.50
RBF                  97.56     100.00            97.50             98.75

Using horizontal detail coefficients (Dh2 and Dh1) and diagonal detail coefficients (Dd2 and Dd1)
Linear               94.83     91.67             95.00             93.33
Polynomial order 2   98.35     99.17             98.33             98.75
Polynomial order 3   99.16     98.33             99.17             98.75
Polynomial order 4   98.35     99.17             98.33             98.75
RBF                  97.56     100.00            97.50             98.75

Diabetic Retinopathy Risk Index (DRRI)

We have also developed a Diabetic Retinopathy Risk Index (DRRI) using the clinically significant features A1, Dh1, Dv1, Dd1, A2, Dh2, Dv2 and Dd2. The index, represented by equation (9), can be used as an indicator of the condition of the patient. The DRRI ranges for the two classes are shown in Table 4 and Fig. 5, which clearly illustrate the discriminatory ability of the DRRI for the two classes.

(9)

Table 4 Range of DRRI values for normal and DR classes.

Stages    Normal              DR                      p-value
Index     3.7502 ± 0.1182     3.9868 ± 9.1883E-02     < 0.0001

Fig. 5 Box plot of the DRRI.


Fig. 6 Snapshot of the graphical user interface of the proposed system.

A snapshot of the graphical user interface of our proposed system is shown in Fig. 6. The image to be diagnosed is selected by clicking on the 'Browse...' button. On clicking the 'Extract Features' button, image pre-processing is performed and the eight DWT features are extracted and displayed in their respective text boxes. Finally, the 'Compute' button is clicked to calculate the DRRI for the chosen image. The DRRI value (4.1) is displayed in the 'Index' text box and the diagnosis of the unknown image (DR) is displayed in the 'Condition of patient' text box.

6. DISCUSSION

The extracted horizontal and diagonal detailed coefficients capture the subtle differences between the normal and DR images, such as morphological changes in the blood vessels, exudates, hemorrhages, and microaneurysms. It can be clearly seen from Table 2 that all these features (Dh1, Dh2, Dd1 and Dd2) show higher values for the normal class than for the DR class (p < 0.0001). The presence of exudates and hemorrhages reduces the subtle high-frequency detail in the retinal images of the DR class, and hence these features are more dominant in the normal retina than in the DR retina. We used these most discriminatory features to classify the two classes using SVM classifiers, which yielded the highest accuracy of over 99%. We also combined all the extracted features into the single equation (9) to discriminate the two classes using just one value; it can be seen from Table 4 that these values are very distinct. This mathematical equation was obtained by trial and error.

In this study, we have made an effort to automatically distinguish the normal and DR classes using the DWT method. The mild NPDR, moderate NPDR, severe NPDR and PDR classes together form the DR class. The presence of, and subtle changes in, the blood vessels, exudates, microaneurysms and hemorrhages can be deciphered using the DWT method. Table 5 summarizes the studies conducted on the automated identification of normal and DR classes.

Five-class (normal, mild DR, moderate DR, severe DR, and PDR) analysis

Four bispectral invariant features of higher-order spectra and an SVM classifier were used to classify fundus images into normal, mild NPDR, moderate NPDR, severe NPDR, and PDR classes with an accuracy of 82%, a sensitivity of 82% and a specificity of 88% [25]. The same group classified the fundus images into these five classes using an SVM classifier with hemorrhages, microaneurysms, exudates and blood vessel areas as features, with an accuracy of more than 85%, a sensitivity of more than 82% and a specificity of more than 86% [26].

Four-class (normal, moderate NPDR, severe DR, and PDR) analysis

Fundus images have been classified into normal, mild NPDR, moderate NPDR, severe NPDR and PDR stages using the area and perimeter of the RGB components of the blood vessels and a feed-forward neural network, with an accuracy of 84% and a sensitivity and specificity of 90% and 100% respectively [17].

Three-class (normal, NPDR and PDR) analysis

Fundus images have been classified into normal, NPDR and PDR classes using the area of the exudates, blood vessels and texture parameters [5]; the authors demonstrated an accuracy of 93%, a sensitivity of 90% and a specificity of 100%. Another group classified images into the same three classes using hard exudates, cotton wool spots, hemorrhages and a neural network [27]; they were able to classify the normal and NPDR classes with an accuracy of 82.6% and the PDR class with an accuracy of 88.3%.

Two-class (normal and DR) analysis

Using hemorrhages, hard exudates and microaneurysms, DR and normal retinas were differentiated with a sensitivity of 80.21% and a specificity of 70.66% using an artificial neural network (ANN) classifier [28]. Yellow and red colored lesions were used to classify the normal and DR classes with a sensitivity, specificity and accuracy of 71.4%, 96.7% and 85% respectively [10]. Red and bright lesions in the fundus images, coupled with an ANN, were able to differentiate normal and DR images with a sensitivity of 95.1% and a specificity of 46.3% [11]. Microaneurysms combined with a Bayesian framework were able to automatically classify DR and normal images with a sensitivity and specificity of 100% and 67% respectively [29]. Vasculature and red lesions of the fundus images were used as features for a k-nearest neighbor classifier, with a reported sensitivity and specificity of 100% and 87% respectively in classifying the normal and DR classes [30]. Exudates extracted from the retinal images were fed to a Fuzzy C-Means (FCM) clustering technique [31], which classified the normal and DR classes with a sensitivity of 96%, a specificity of 94.6% and an accuracy of 90.1%.

Table 5 Summary of the studies made on automated detection of DR found in the literature.

Reference                    Features                                    Classifier/method                    Sensitivity (%)   Specificity (%)   Accuracy (%)

Five class classification
Acharya et al. [25]          Higher order spectra                        SVM                                  82.5              88.9              82
Acharya et al. [26]          Blood vessels, exudates, HMA                SVM                                  82                86                85.9

Four class classification
Wong et al. [17]             Blood vessel perimeter                      Neural network                       90                100               84

Three class classification
Lee et al. [27]              Hard exudates, cotton wool spots and HMA    Neural network                       Accuracy per class: Normal 82.6, NPDR 82.6, PDR 88.3
Nayak et al. [5]             Hard exudates, blood vessel, and contrast   Neural network                       90                100               93

Two class classification
Sinthanayothin et al. [28]   Hemorrhages, microaneurysms and exudates    Neural network                       80.21             70.66             Not specified
Larsen et al. [10]           Red lesions                                 Automated image-analysis algorithm   71.4              96.7              85
Usher et al. [11]            Red and bright lesions                      Neural network                       95.1              46.3              Not specified
Kahai et al. [29]            Microaneurysms                              Bayesian                             100               67                Not specified
Niemeijer et al. [30]        Vasculature and red lesions                 KNN                                  100               87                Not specified
Osareh et al. [31]           Exudates                                    Fuzzy C-means                        96                94.6              90.1

Our method                   DWT based features                          SVM                                  99.17             99.17             99.17

In this work, the energy features derived from the DWT sub-bands were fed to an SVM classifier; the polynomial kernel of order 3 yielded an average accuracy, sensitivity and specificity of 99.17%. These energy features of the detailed coefficients at levels 1 and 2 of the DWT were able to capture the subtle changes in the DR fundus images due to blood vessels, exudates, microaneurysms and hemorrhages. The advantage of our system is that it is cost effective and fast: extracting the energies of the detailed coefficients of the DWT and estimating the DRRI is not time consuming, and just one number (the DRRI) is able to separate the two classes easily. We have also developed an automated DR system to classify the normal and DR classes using the SVM classifier. The proposed graphical user interface is simple enough that even a nurse can use it.

However, the performance of the system needs to be tested on a large and diverse database; in this work, we tested it on only 240 fundus images, and the classification efficiency may drop as the number of fundus images increases. The classification accuracy could then be further improved by using additional features such as textures, the distance between the fovea and the exudates, and the areas of the blood vessels, exudates, microaneurysms and hemorrhages.

7. CONCLUSION

In this paper, we have proposed a computer-aided diagnosis system to detect the normal and DR classes using digital fundus images. Three DWT energy features were extracted from the digital fundus images and fed to the SVM classifier for automated detection. We obtained an average accuracy, sensitivity and specificity of more than 99% using ten-fold cross validation. We have also developed a graphical user interface for this system; it is very easy to use and delivers the diagnosis result immediately. However, it is very difficult to achieve 100% accuracy with such systems: their performance depends on factors such as the size and quality of the training data, the robustness of the training and the features extracted.

Finally, we have gone one step further and formulated the Diabetic Retinopathy Risk Index (DRRI), composed of DWT energy parameters and given by equation (9). The DRRI can be employed for the diagnosis of the DR class, and is found to effectively distinguish diabetic retinopathy subjects from normal subjects, as shown in Table 4 and Fig. 5. The advantage of the DRRI is that just one number can be used to diagnose the normal and DR classes.
