Retinal Blood Vessel Segmentation Biology Essay


Currently, there is increasing interest in setting up systems and algorithms that can screen large numbers of people for sight-threatening diseases such as DR and then provide automated detection of the disease. Digital image processing has become a practical and useful tool for DR screening. Digital imaging offers a high-quality permanent record of the retina, which ophthalmologists can use to monitor progression or response to treatment. Moreover, digital images can be processed by automatic analysis systems. Retinal image analysis is a complicated task because of the variability of the images in terms of colour/gray levels, the morphology of the retinal anatomical and pathological structures and the existence of particular features in different patients, which may lead to erroneous interpretation. Several examples of the application of digital imaging techniques to the identification of DR can be found in the literature. A number of research investigations have sought to identify the main retinal components, such as blood vessels, the optic disk and the fovea, and retinal lesions, including microaneurysms, haemorrhages and exudates [13-17]. The major contributions to extracting the normal and abnormal features of fundus images are described in this chapter.


This chapter is organized as follows. In Section 2.1, the literature corresponding to blood vessel segmentation is reviewed. The major works related to localization and contour detection are discussed in Section 2.2. Section 2.3 reviews the literature of fovea and macula detection. In Sections 2.4 and 2.5, the background information of bright lesions and red lesions is presented.


2.1. Blood Vessel Segmentation

Several studies have been carried out on the detection or enhancement of blood vessels in general, but only a small number of them relate to retinal blood vessels in particular. To review the methods proposed to identify vessels in retinal images, seven classes of algorithms are considered: matched filters, vessel tracking, morphological processing, region growing, multiscale, supervised and adaptive thresholding approaches.

2.1.1. Matched Filters

Matched filters are based on a correlation measure between an expected shape and the measured signal. The algorithm presented by Chaudhuri et al. [18] was based on a directional 2-D matched filter. To enhance the retinal vasculature, a two-dimensional matched filter kernel was designed and convolved with the original fundus image. The kernel was rotated into many different orientations (usually eight or twelve) to fit vessels of different orientations. A number of kernel shapes have been investigated: Gaussian kernels were used in [18-20], while kernels based on lines [21] and partial Gaussian kernels [22] were also used.
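As a rough sketch of this idea (not the exact design of [18]; the kernel size, σ, segment length and 12 orientations are illustrative assumptions), an oriented Gaussian matched filter can be built and its maximum response over orientations taken as follows:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def matched_kernel(sigma=2.0, length=9.0, angle_deg=0.0, size=15):
    """One oriented matched-filter kernel: an inverted Gaussian
    cross-profile (vessels are darker than background) swept along
    a line segment rotated by angle_deg, normalised to zero mean."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(angle_deg)
    u = xs * np.cos(theta) + ys * np.sin(theta)   # along the vessel
    v = -xs * np.sin(theta) + ys * np.cos(theta)  # across the vessel
    k = np.where(np.abs(u) <= length / 2,
                 -np.exp(-(v ** 2) / (2 * sigma ** 2)), 0.0)
    k[k != 0] -= k[k != 0].mean()                 # zero mean inside the band
    return k

def matched_filter_response(img, n_angles=12, **kw):
    """Maximum correlation response over n_angles kernel orientations."""
    resp = []
    for a in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        k = matched_kernel(angle_deg=a, **kw)
        win = sliding_window_view(img, k.shape)
        resp.append(np.einsum('ijkl,kl->ij', win, k))
    return np.max(resp, axis=0)
```

Because the kernel is zero-mean, flat background yields no response, while a dark linear structure aligned with one of the orientations yields a strong positive response.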

A number of strategies have also been proposed to identify true vessels from the matched filter response. Region based threshold probing of the matched filter response was used by Hoover et al. [20]. An amplitude modified second order differential Gaussian filter was proposed by Gang et al. [23] to detect vessels at scales that match their widths. This was achieved by changing the amplitude, so that responses can be combined over scales. Local entropy based thresholding was proposed by Chanwimaluang et al. [23].
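A simplified global variant of entropy-based thresholding (Kapur-style maximum entropy on a histogram; the cited work applies a local formulation) can be sketched as:

```python
import numpy as np

def max_entropy_threshold(hist):
    """Kapur-style maximum-entropy threshold: choose t that maximises
    the summed entropies of the two classes hist[:t] and hist[t:]."""
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0    # class distributions, renormalised
        q1 = p[t:][p[t:] > 0] / p1
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

Applied to the histogram of a matched filter response, the selected threshold separates the vessel class from the background class.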

2.1.2. Tracking Methods

Tracking methods look for a continuous vessel segment starting from a point given either manually or automatically, using local information [25-31] and usually trying to find the path that best matches a vessel profile model. Sobel edge detectors, gradient operators and matched filters have been used to find the vessel direction and boundary. Although these methods can be confused by vessel crossings and bifurcations [32-33], they provide accurate measurements of vessel width and tortuosity.
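A toy version of the tracking idea (greatly simplified: no profile model, a roughly horizontal vessel assumed, and the step rule and darkness threshold are illustrative assumptions) can be sketched as:

```python
import numpy as np

def track_vessel(img, seed, max_steps=100, dark_thresh=0.5):
    """Toy tracker for a roughly horizontal dark vessel: march
    column-by-column from the seed, stepping to the darkest of the
    three pixels in the next column (one row up, same row, one row
    down); stop at the border or when the path brightens."""
    r, c = seed
    h, w = img.shape
    path = [(r, c)]
    for _ in range(max_steps):
        if c + 1 >= w:
            break
        rows = [rr for rr in (r - 1, r, r + 1) if 0 <= rr < h]
        r = min(rows, key=lambda rr: img[rr, c + 1])
        c += 1
        if img[r, c] > dark_thresh:   # lost the vessel profile
            break
        path.append((r, c))
    return path
```

Real trackers replace the "darkest pixel" rule with a matched vessel profile and estimate the local direction, which is what makes width and tortuosity measurements possible.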

2.1.3. Morphological Processing

To extract retinal blood vessels, mathematical morphology can be used, since vessels are patterns that exhibit morphological properties such as connectivity, linearity and curvature varying smoothly along the crest line. However, background patterns can also fit such a description. To differentiate vessels from other patterns, cross-curvature evolution and linear filtering were employed by Zana et al. [34]. A two-stage method was applied to extract the vasculature by Fang et al. [35]. First, the vessels were enhanced by mathematical morphology filtering coupled with curvature evolution. To recover the complete vessel network, a reconstruction process using dynamic local region growing was then performed. The major drawback of this approach is that important features such as bifurcation and intersection points may be missed.
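The core morphological idea, that linear structures survive an opening with a line structuring element aligned with them while small blobs do not, can be sketched with a supremum of line openings (only 0° and 90° here, whereas the full methods use many orientations and work on the inverted green channel so that vessels are bright; the length of 7 pixels is an illustrative assumption):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def line_opening(img, length, axis):
    """Grey-level opening with a 1-D line structuring element:
    erosion (min filter) followed by dilation (max filter)."""
    pad = [(0, 0), (0, 0)]
    pad[axis] = (length // 2, length // 2)
    def filt(a, op):
        win = sliding_window_view(np.pad(a, pad, mode='edge'),
                                  length, axis=axis)
        return op(win, axis=-1)
    return filt(filt(img, np.min), np.max)

def sup_of_openings(img, length=7):
    """Supremum of line openings: bright elongated structures at
    least `length` long survive; small isolated blobs are removed."""
    return np.maximum(line_opening(img, length, axis=0),
                      line_opening(img, length, axis=1))
```
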

2.1.4. Region Growing Approaches


In region growing approaches, it is assumed that pixels that are close to each other and have similar intensity levels belong to the same object. These approaches recruit pixels incrementally into a region, starting from a seed point, based on predefined criteria [36-38]; the criteria used for segmentation are value similarity and spatial proximity. A limitation of region growing approaches is that they often require user-supplied seed points. Region growing may also produce holes and over-segmentation because of variations in image intensity and noise, so post-processing of the segmentation result is often necessary.
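A minimal sketch of the growing rule (value similarity plus spatial proximity; the 4-connectivity and tolerance are illustrative assumptions, and no post-processing is included):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Grow a region from `seed`: repeatedly add 4-connected
    neighbours whose intensity is within `tol` of the seed's."""
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
                    and abs(img[rr, cc] - ref) <= tol:
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask
```
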

2.1.5. Multiscale Approaches

Vessel segmentation in multiscale approaches is performed by varying the image resolution [39-45]. One advantage of these techniques is their processing speed: larger blood vessels are extracted from low-resolution images, while fine vessels are extracted at high resolution.
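The resolution hierarchy behind these approaches can be illustrated with a crude image pyramid (2x2 block averaging stands in for the blur-and-decimate step of a proper pyramid): wide vessels remain visible at coarse levels, while thin vessels fade.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid level)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2] \
        .reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid(img, levels=3):
    """Multiresolution stack: level 0 is the full-resolution image,
    each further level halves the resolution."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out
```
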

A method based on the multi-scale analysis of the first and second order spatial derivatives of the intensity image was used for the detection of blood vessels of different widths, lengths and orientations [40]. A two stage region growing procedure was used in this method. The growth was constrained to regions with low gradient magnitude in the first stage. In the second stage, this constraint was relaxed to allow borders between regions to be defined.

A vascular modeling algorithm based on a multiresolution image representation was proposed by Wang et al. [45]. A two-dimensional Hermite polynomial was used to model the retinal vasculature in a quad-tree structure over a range of spatial frequency resolutions. The parameters of the Hermite model were estimated using an expectation-maximization-type optimization technique, after which an information-based process was employed to select the most appropriate scale or model for each region of the image. The vascular network was segmented by Sofka et al. [46] based on the responses of multiscale matched filters, a vessel confidence measure, the gradient at the boundary of vessels, and the edge strength at the boundary.

2.1.6. Supervised Methods

Recently, several supervised methods [47-49] focusing on 2-D retinal images have been explored to obtain better results. Two retinal vessel segmentation methods based on line operators were proposed by Perfetti et al. [47]. In the first method, the response of the line detector was thresholded to obtain an unsupervised pixel classification. In the second, two orthogonal line detectors were employed, along with the gray level of the target pixel, to construct a feature vector for supervised classification using Support Vector Machines (SVMs).

A pixel-based classification method was developed to segment blood vessels [48]. This method classifies each image pixel as vessel or non-vessel based on the pixel's feature vector, which consists of the pixel's intensity and its two-dimensional Gabor wavelet transform responses taken at multiple scales. The feature vectors were classified using a Bayesian classifier with a Gaussian mixture model.
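Building such a per-pixel feature vector can be sketched as follows (a simplification: a single fixed orientation and a real cosine carrier, whereas the cited method uses the maximum Gabor wavelet modulus over orientations; the scales, frequency and kernel size are illustrative assumptions):

```python
import numpy as np

def gabor_kernel(sigma, freq, theta, size=15):
    """Real 2-D Gabor: Gaussian envelope times a cosine carrier of
    spatial frequency `freq` along orientation `theta` (radians)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    k = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * freq * xr)
    return k - k.mean()        # zero mean: no response on flat regions

def pixel_features(img, r, c, scales=(2.0, 3.0), freq=0.15,
                   theta=0.0, size=15):
    """Feature vector for one pixel: its intensity plus the Gabor
    response of the surrounding patch at each scale."""
    half = size // 2
    patch = img[r - half:r + half + 1, c - half:c + half + 1]
    feats = [float(img[r, c])]
    for s in scales:
        feats.append(float((patch * gabor_kernel(s, freq, theta)).sum()))
    return np.array(feats)
```

Each pixel's vector would then be fed to the classifier (a Gaussian mixture Bayesian classifier in [48]) trained on hand-labelled vessel and non-vessel pixels.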

Image ridges were extracted by Staal et al. [49] and used to compose primitives in the form of line elements. Using these line elements, the image was partitioned into patches by assigning each image pixel to the closest line element, and a local coordinate frame was constituted for every line element and its corresponding patch. Using the properties of the patches and the line elements, feature vectors were computed for every pixel. These feature vectors were then classified using a nearest-neighbour classifier together with sequential forward feature selection.

2.1.7. Adaptive Thresholding Methods

The features of the vasculature were captured by using nonlinear orthogonal projections in Zhang et al. [50] and a local adaptive thresholding algorithm was employed for vessel detection. Knowledge-guided adaptive thresholding was employed by Jiang et al. [51] to segment vessel network. Multi-threshold probing was directly applied to the image through a verification procedure which makes use of a curvilinear structure model. The relevant information about objects, including shape, colour/intensity, and contrast was incorporated, which guides the classification procedure.
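The common core of these adaptive schemes, comparing each pixel against a statistic of its own neighbourhood rather than a global threshold, can be sketched with simple local-mean thresholding (window size and offset are illustrative assumptions; the cited methods use far richer verification models):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def adaptive_threshold(img, win=7, offset=0.05):
    """Local-mean thresholding: flag a pixel as vessel when it is
    darker than its neighbourhood mean by more than `offset`."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    local_mean = sliding_window_view(padded, (win, win)).mean(axis=(-2, -1))
    return img < local_mean - offset
```

Because the threshold adapts to the local background, a dark vessel is detected even when illumination varies across the image.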

The methods discussed above for vessel segmentation can work well to extract the major parts of vasculature. However, the major challenges confronting the above vessel segmentation methods are:

Segmentation of the thinner vessels as the image contrast is generally low around thin vessels;


The presence of lesions as they may be mis-enhanced and mis-detected as blood vessels.

To address these problems, a new method for vessel segmentation is proposed in this thesis. The method uses the intensity information from the red and green channels of the same retinal image, together with thresholding based on local relative entropy with histogram compression and translation.


2.2. Optic Disk Localization and Contour Detection

Reliable and efficient optic disk localization and boundary estimation are significant tasks in an automated DR screening system. Optic disk localization is required as a prerequisite for the succeeding stages in many algorithms applied for identification and segmentation of the anatomical and pathological structures in retinal images. The diameter of the optic disk is used as a reference length for measuring distances and sizes. Precise localization of the optic disk boundary is very useful in proliferative DR, where fragile vessels develop in the retina, largely in the optic disk region, in response to circulation problems created during earlier stages of the disease.

If the optic disk is identified, the position of areas of clinical importance, such as the fovea, may be determined. The location of the optic disk can be used as a landmark for retinal image registration. As the optic disk is the origin of the major blood vessels and nerves, it may be used as a starting point for vessel tracking methods. Many schemes have been proposed to localize the optic disk, but most of them only locate it and do not address the problem of optic disk boundary detection. Reliable optic disk localization is surprisingly difficult due to its highly variable appearance in retinal images.

2.2.1. Localization of Optic Disk

The algorithms in [52-54] localize the optic disk by finding the largest clusters of pixels with high intensities. In [14], the region with the highest intensity variation among adjacent pixels was identified as the optic disk: the intensity variance of adjacent pixels was evaluated using an 80 x 80 sub-image, and the point with the largest variance was marked as the optic disk location. These algorithms did not consider retinal images containing bright lesions, although retinal images with small lesions were considered by Lalonde et al. [54].
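The variance-based localization idea can be sketched as follows (a small 9 x 9 window on a synthetic image, where [14] uses an 80 x 80 sub-image on a full fundus image):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def locate_disk_by_variance(img, win=9):
    """Return the centre of the window with the highest intensity
    variance; the bright disk crossed by dark vessels varies most."""
    v = sliding_window_view(img, (win, win)).var(axis=(-2, -1))
    r, c = np.unravel_index(np.argmax(v), v.shape)
    return r + win // 2, c + win // 2
```

This is exactly why the approach fails on images with large bright lesions: a lesion patch crossed by vessels can produce a variance peak of its own.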

The method proposed by Hoover et al. [55] uses the convergence of the blood vessel network as the primary feature for optic disk detection. In this method, the optic disk was identified as the focal point of the blood vessel network. The convergence of the vessel network was detected by finding the end points of the linear shapes such as blood vessels. A combination of mathematical morphology and watershed transformation was used to detect optic disk [56]. In this method, a shade correction technique was applied to reduce the contrast of the hard exudates and to eliminate the slow background variations. Local gray level intensity variance of neighbouring pixels was applied to the shade corrected image to estimate the locus of the optic disk. Watershed transformation was used to locate the optic disk boundaries. Osareh et al. [57] proposed template based optic disk localization. This approach employed colour normalization of retinal images followed by template matching.

A hybrid approach based on intensity and vessel-structure features was used to localize the optic disk [58]. First, candidate optic disk locations were derived from curvature information used to detect hill-type topographical features, which inherently encode intensity features. Each candidate was then assigned a confidence measure derived from vessel structure information, and the candidate with the highest value was taken as the optic disk. In the algorithm proposed by Ying et al. [59], candidate optic disk regions were first selected by finding all bright spots in a local surrounding, after which the binary skeleton map of the large blood vessels was obtained using the method proposed by Zhang et al. [60]. Fractal analysis was then applied around the candidate areas, and the candidate area with the highest fractal dimension was considered the optic disk: since the optic disk is the area where all major vessels merge, it presents the highest fractal dimension among the bright regions.

The algorithms proposed by Tamura et al. [52], Liu et al. [53] and Sinthanayothin et al. [14] fail to localize the optic disk when large hard exudates coexist in the retinal image, as differentiating the optic disk from large exudates becomes a challenge. These methods obtained satisfactory results only in normal retinal images, where the optic disk is bright and observable. The algorithm proposed by Osareh et al. [57] assumed that the optic disk is approximately circular and consists of bright pixels; it failed when the optic disk was not the largest and brightest region in the fundus image.

To localize the optic disk accurately, a new method based on the blood vessel information in the optic disk region is proposed in this thesis. This method localizes the optic disk by finding the vessel branch with the highest number of blood vessel connections.

2.2.2. Contour Detection of Optic Disk

The contour of the optic disk is used to assess the progress of eye disease and the results of treatment. The contour was estimated as a circle or an ellipse in [52-54], because the shape of the optic disk is round or slightly vertically oval. A 2-D Hough transform was employed to obtain the estimated circle of the optic disk from the result of edge detection [52-53], while in [54] the contour was estimated using Hausdorff-based matching between the detected edges and circle templates of different sizes. However, estimating the shape of the optic disk as a circle or an ellipse cannot provide enough information to ophthalmologists. As the shape of the optic disk is important for diagnosing eye diseases, exact boundary detection has to be investigated.
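A fixed-radius circular Hough transform, the simplest form of the idea in [52-53] (real implementations also search over the radius and use gradient direction to limit the votes), can be sketched as:

```python
import numpy as np

def hough_circle_centre(edge_mask, radius, n_angles=64):
    """Fixed-radius circular Hough transform: every edge pixel votes
    for all candidate centres `radius` away; the accumulator maximum
    is the estimated centre."""
    h, w = edge_mask.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    for r, c in zip(*np.nonzero(edge_mask)):
        rr = np.round(r - radius * np.sin(thetas)).astype(int)
        cc = np.round(c - radius * np.cos(thetas)).astype(int)
        ok = (rr >= 0) & (rr < h) & (cc >= 0) & (cc < w)
        np.add.at(acc, (rr[ok], cc[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)
```
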

Snakes were applied to detect the exact contour of the optic disk [61-63]. The major advantage of these algorithms is their ability to bridge discontinuities in the image feature being located. However, the methods proposed in these papers were not fully automatic, as they required manual initialization, and the main difficulty in applying them to disk boundary detection is removing the influence of blood vessels. In [63], the optic disk boundary was segmented in two stages. First, the original retinal image was preprocessed based on local minima detection and mathematical morphology in order to eliminate the interfering blood vessels. Gray-level dilation with a 5 x 5 structuring element was performed to remove the blood vessels in the optic disk region, followed by an erosion to restore the boundaries to their former position; a morphological reconstruction operator was then applied by maintaining the maximum of the dilated/eroded image and the original one. Second, the optic disk contour was located using a deformable active contour driven by an external image field called Gradient Vector Flow (GVF). This method was tested on a set of 9 retinal images, and the authors reported accurate optic disk boundary localization in all the test images. The algorithms proposed by Viranee et al. [64] and Osareh et al. [57] also used GVF to detect the boundary of the optic disk.

A modified active shape model was proposed by Li et al. [65] to detect the boundary of the optic disk. A point distribution model built from a training set was used. An iterative searching procedure was applied to locate the instance of such shapes in a new image.

For retinal images having ill-defined optic disk and for images having fuzzy elliptic disk, algorithms such as 2D Hough transform [52] and GVF [57, 63, 64] failed to detect the boundary of optic disk.

A new method based on geometric active contours is proposed in this thesis to estimate the boundary of the optic disk. The proposed method uses mathematical morphology in Lab colour space and geometric active contours with a variational formulation to detect the contour of the optic disk. This method works accurately even when the boundary of the optic disk is discontinuous or blurred.


2.3. Fovea and Macula Detection

The position of an abnormality relative to the fovea is useful for effective diagnosis of DR and other retinal diseases. The fovea is the centre of the macula and lies at approximately 2.5 optic disk diameters from the optic disk [14]. The macula is commonly a hazy area, darker than the surrounding retinal tissue. To detect the macula and fovea, a template matching approach, with a Gaussian blob as the template, was used by Sinthanayothin et al. [14]. A model-based approach was used to detect the fovea [65], in which information derived from an active shape model was exploited. A single point distribution model was also used to detect the fovea [66]; this method used a cost function combining both global and local cues to locate the exact positions of the model points. An appearance-based localisation method using different image channels was applied to detect the fovea [67]. The method proposed by Sagar et al. [68] first finds the vessel pixels and then detects the macula by finding the darkest cluster of pixels near the optic disk.
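The Gaussian-blob template matching idea can be sketched as follows (template size and σ are illustrative assumptions; the image is inverted so that the dark fovea becomes the bright structure the template matches):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gaussian_blob(size, sigma):
    """Isotropic Gaussian blob of the given size, peak at the centre."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    return np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))

def find_fovea(img, size=9, sigma=2.0):
    """Correlate the inverted image with a zero-mean Gaussian-blob
    template (the fovea is a dark blob) and return the best match."""
    t = gaussian_blob(size, sigma)
    t -= t.mean()                               # ignore flat background
    win = sliding_window_view(1.0 - img, (size, size))
    score = np.einsum('ijkl,kl->ij', win, t)
    r, c = np.unravel_index(np.argmax(score), score.shape)
    return r + size // 2, c + size // 2
```
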

A new algorithm is developed in this thesis to detect the fovea based on information about the blood vessels and the optic disk. The algorithm is robust to the presence of pathologies and badly illuminated regions that have an appearance similar to the fovea.


2.4. Bright Lesion Detection

Bright lesions are among the most commonly occurring lesions caused by DR. They are associated with patches of vascular damage with leakage. The size and distribution of bright lesions may vary during the progress of the disease, and their detection and quantification will significantly contribute to mass screening and assessment of background DR. Here, the major bright lesion identification methods in the literature are reviewed.

Philips et al. [69-70] used a two-step strategy to segment bright lesions. First, the colour retinal images were shade-corrected to eliminate non-uniformities; second, the contrast of the exudates was enhanced. Global and local threshold values were then used to segment the hard exudates from the retinal images. The lesion-based sensitivity of this method was reported between 61% and 100% (mean 87%) [70]. The algorithm proposed by Ward et al. [71] also used a two-step strategy: the fundus image was pre-processed to reduce shade variations in the image background and to enhance the contrast between the background and the exudate lesions, and the bright lesions were then segmented from the background on a brightness (gray level) basis. This algorithm required user intervention to select the threshold value.

A dynamic thresholding algorithm was developed to segment exudates from retinal images [53]. The retinal images were first divided into 64 x 64 pixel patches. Using the histogram of each patch, a local threshold was calculated. The dynamic threshold of every pixel was found using interpolation of the local thresholds of 4 neighbouring patches which include that pixel.
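The patch-wise dynamic thresholding idea can be sketched as follows (simplified: the local threshold here is the patch mean plus an offset, upsampled by nearest-neighbour, whereas [53] computes a histogram-based threshold per 64 x 64 patch and bilinearly interpolates the thresholds of the 4 neighbouring patches; patch size and offset are illustrative assumptions):

```python
import numpy as np

def dynamic_threshold(img, patch=8, offset=0.1):
    """Patch-wise dynamic thresholding: pixels brighter than their
    local (per-patch) threshold become exudate candidates."""
    h, w = img.shape
    ph, pw = h // patch, w // patch
    crop = img[:ph * patch, :pw * patch]
    means = crop.reshape(ph, patch, pw, patch).mean(axis=(1, 3))
    thr = np.kron(means + offset, np.ones((patch, patch)))
    return crop > thr
```

The point of the local threshold is visible on a shaded image: a lesion in a dim region is flagged, while a uniformly bright region is not.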

A prototype was presented by Goldbaum et al. [13] on automated diagnosis and understanding of retinal images. Features like object colour, border colour, texture measures, compactness, area, edge gradient and turns per length of the border were used to segment bright lesions. This method achieved an accuracy of 89% for identification of the bright objects.

Ege et al. [72] used a combination of template matching, region growing and thresholding techniques to detect bright lesions. Before applying these techniques, the retinal images were preprocessed using median filter to eliminate noise. Using Bayesian classifier the bright lesions were then classified into exudates, cottonwool spots and noise. The classification performance for this stage was 62% for exudates and 52% for the cottonwool spots. A minimum distance discriminant classifier was used by Wang et al. [73] to classify each pixel into yellow lesions (exudates, cottonwool spots) or non-lesions (vessels and background). The image based diagnostic accuracy of this approach was reported as 100% sensitivity and 70% specificity.

In addition to the discussed techniques above, neural networks were also exploited to classify the retinal abnormalities in a few studies. Gardner et al. [74] divided the retinal images into sub-images of size 20 x 20. Subsequently, these sub-images were applied to a back propagation neural network which was trained for five days with 400 inputs. This technique detected the blood vessels, exudates and hemorrhages. The sensitivity of the exudate detection method was 93.1%. This performance was the result of classifying the whole 20 x 20 pixel patches rather than a pixel resolution classification. Neural networks were used by Hunter et al. [75] to classify bright lesions. In this method, the retinal image was divided into 16 x 16 sub-images and eleven inputs were used to train the neural network. This work intended to discriminate the exudates from drusen and achieved 91% lesion-based performance.

A recursive region growing algorithm was used to segment exudates in fundus images [76]. A sensitivity of 88.5% and specificity of 99.7% were reported; however, these performances were measured on 10 x 10 patches. Gray level variation of the exudate candidates in the green channel was used to segment exudates [77]. After initial localization using mathematical morphology techniques, the contours of the exudates were subsequently determined. The approach used three parameters for detecting exudates:

The size of the local window used to calculate the pixel local variation;

A first threshold used to find the candidate exudate regions; and

A second threshold representing the minimum value by which a candidate must differ from its surrounding background pixels to be classified as an exudate.

The mean sensitivity and mean predictivity achieved by this method were 92.8% and 92.4% against a set of 15 abnormal retinal images.

The method proposed by Osareh et al. [78] first normalizes the fundus image using histogram specification. Local contrast enhancement was then performed to improve both the contrast of lesions against the background and the overall colour saturation of the image. This was followed by Fuzzy C-Means (FCM) clustering to segment probable exudate candidates, and a multilayer perceptron neural network with ten inputs was used to separate exudate candidates from non-exudates. This method achieved a sensitivity of 92% and a specificity of 82%. The same authors used SVMs to classify the exudate candidates and achieved a sensitivity of 87.5% and a specificity of 92% [79]. The algorithm proposed by Sopharak et al. [80] also used FCM clustering to segment exudates in non-dilated retinal images. Four dominant features (hue, standard deviation, intensity and adaptive edge) were fed to FCM for coarse segmentation, followed by fine segmentation using morphological reconstruction.
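The FCM step shared by these methods can be sketched on 1-D intensities (k = 2 clusters and fuzziness m = 2 are illustrative assumptions; the cited methods cluster multi-dimensional colour/feature vectors):

```python
import numpy as np

def fcm_1d(x, k=2, m=2.0, iters=50, seed=0):
    """Fuzzy C-Means on 1-D intensities: alternate centroid and
    fuzzy-membership updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), k))
    u /= u.sum(axis=1, keepdims=True)           # random fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centres = um.T @ x / um.sum(axis=0)     # weighted cluster means
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))           # standard FCM update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centres, u
```

Pixels with high membership in the bright cluster become the exudate candidates that the subsequent classifier (neural network or SVM) then filters.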

Zhang et al. [81] presented a three-stage approach to detect bright lesions and classify them into exudates and cottonwool spots. First, local contrast enhancement was applied as a preprocessing stage. An improved Fuzzy C-Means algorithm was then applied in the Luv colour space to segment all candidate bright lesion areas. Finally, a hierarchical SVM classification stage was used to separate bright lesions from non-lesions; the authors also classified exudates versus cottonwool spots using a polynomial kernel in the SVM. The method distinguishes bright lesions from bright non-lesions with a sensitivity of 97% and a specificity of 96%, and exudates from cottonwool spots with a sensitivity of 88% and a specificity of 84%.

The algorithm proposed by Li et al. [65] divides the retinal image into 64 sub-images and exudate detection was performed in each subimage. A combined method of region growing and edge detection was applied to detect the exudates. The sensitivity and specificity of this algorithm for detecting exudates were 100% and 71% respectively.

In [82], image contrast was first enhanced by a neurofuzzy subsystem in which properly codified fuzzy rules were implemented using a sparsely connected (4 x 4)-cell Hopfield-type neural network. The contrast-enhanced images were then segmented to isolate suspect areas in binary output images, after computing the optimal global threshold with a neural-network-based subsystem.

To detect the bright lesions, a new method based on Spatially Weighted Fuzzy C-Means (SWFCM) clustering is proposed in this thesis. The weight in the SWFCM algorithm, inspired by the K-Nearest Neighbour (KNN) classifier, takes the neighbourhood influence on the central pixel into account and is modified to improve clustering performance. Because neighbourhood information is considered, the proposed method is noise resistant. As the gray-level histogram of the image is used instead of the full image data, the computation time is much lower than that of the other FCM-based techniques discussed above.


2.5. Red Lesion Detection

Microaneurysms and hemorrhages are the red lesions found in retinal images. Microaneurysms appear in the very early stages of DR, whereas hemorrhages appear in the proliferative stage. Hence, detection of the former allows the disease to be caught at the earliest stage, while detection of the latter indicates whether DR has reached an advanced stage. For this reason, the detection of these two dark lesions is very important, and microaneurysm and hemorrhage counts are very good indicators of the progression of the disease.

Several methods for detecting red lesions have been reported in the literature. The red lesion detection algorithm proposed by Marino et al. [83] contains three stages: firstly, a set of correlation filters were applied to extract candidate red lesions. In the second stage, a region growing segmentation was applied to reject candidate red lesions whose size does not fit in the red lesion pattern. Finally, in the third stage three tests i.e. a shape test, an intensity test and a test to remove the points which fall inside the vessels (only lesions outside the vessels were considered) were used to find true red lesions.

In [84], the brightness of the fundus image was adjusted by applying a nonlinear curve to the brightness values in Hue-Saturation-Value (HSV) space. Gamma correction was employed to emphasize brown regions in each of the red, green and blue channels. The hemorrhage candidates were detected using density analysis. Finally, false positives were removed using a rule-based method and three Mahalanobis distance classifiers with a 45-feature analysis.

Gray level grouping based contrast enhancement was used to enhance the contrast of the green channel [85]. Then candidate red lesions were extracted by employing automatic seed generation. Spatio-temporal feature map classifier was used to classify true red lesions from non-red lesions.

Abhir et al. [86] applied an orientation matched filter to the preprocessed retinal image. The output of the orientation matched filter was thresholded to obtain a set of potential candidates. Eigenimage analysis was used to eliminate noise artifacts that resemble the shape profile of microaneurysms. Finally, a second threshold was applied to the eigenspace projection of the candidate regions to remove false positives.

A red lesion detection method based on pixel classification and morphology-based segmentation was presented by Niemeijer et al. [87]. In the pixel classification stage, the vasculature and red lesions were separated from the image background using a KNN classifier. The dark lesion objects were then classified using an extensive set of features and a KNN classifier.

Microaneurysms and hemorrhages were treated as holes, and morphological filling was performed on the green channel to identify them [88]. The unfilled green channel image was then subtracted from the filled one and thresholded in intensity to yield an image (R) containing microaneurysm patches. To remove residual vessel segments, the skeleton of the full blood vessel network was dilated and subtracted from the image R. The remaining patches were further classified using intensity properties and a colour model based on the detected blood vessels.
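The filling step can be sketched on a binary mask (a simplification: [88] performs grey-scale filling on the green channel, but the principle, flood-filling the background from the border so that enclosed "holes" remain, is the same):

```python
import numpy as np
from collections import deque

def fill_holes(mask):
    """Binary hole filling: flood-fill the background from the image
    border; unreached background pixels are holes and get filled.
    Subtracting the input from the result leaves only the holes,
    i.e. the candidate lesion spots."""
    h, w = mask.shape
    reached = np.zeros((h, w), dtype=bool)
    q = deque()
    for r in range(h):
        for c in range(w):
            if (r in (0, h - 1) or c in (0, w - 1)) and not mask[r, c]:
                reached[r, c] = True
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < h and 0 <= cc < w \
                    and not mask[rr, cc] and not reached[rr, cc]:
                reached[rr, cc] = True
                q.append((rr, cc))
    return mask | ~reached
```
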

Candidate microaneurysms were detected by taking the Maximum of Multiple Linear Top-Hats (MLTH) applied to the inverse image [89]. MLTH was adapted to detect larger objects like hemorrhages at multiple scales by repeating with multiple structuring elements [90]. Later the candidate hemorrhages were classified by using SVM classifier.

Generalized eigenvector analysis was applied to obtain the locations of microaneurysms [91]. The probable locations of the microaneurysms were determined by finding the positions of the highest absolute values of the second-smallest eigenvector. These locations were then analyzed using specific features of microaneurysms to identify the true microaneurysms.

A four step strategy was employed to extract microaneurysms [92]. Firstly, local contrast enhancement was used for preprocessing. Using the definition of bounding box closing, small details were extracted. An automatic threshold depending on image quality was calculated. Finally, false positives were eliminated.

The major challenges in red lesion detection for the algorithms discussed above are:

Segmentation of small microaneurysms in the areas of low image contrast; and

The presence of bright pathologies.

As bright lesions have sharp edges, small islands of normal retina are formed between them, when they lie close together. These can be picked up as false positives.

To solve these problems, a hybrid red lesion detection method is proposed. This method combines morphological based red lesion detection and candidate red lesion detection scheme based on matched filtering and local relative entropy.