Recognition Of Iris Patterns Computer Science Essay


Iris Recognition is a method of biometrics authentication that uses pattern-recognition techniques based on high-resolution images of the iris of an individual's eyes.

Iris recognition is composed of various stages: locating and identifying the iris, extracting features from the images and, lastly, matching them against an image database. This report will discuss all of the stages of iris recognition in detail. In addition, the various methods used for iris recognition, such as the Hough transform, the rubber sheet model, Gabor filters and the Hamming distance, will be discussed in detail together with their advantages and disadvantages.

In this project, an iris recognition program will be developed using MATLAB to verify the recognition performance and accuracy rates as a biometric.


I would like to thank my project supervisor, Professor Minyue Fu, for his patience and guidance in helping me address key concerns and doubts throughout the one-year project.

I would like to thank my project supervisor, Dr James Welsh, and my local supervisor, Mr Lee Chin Kang, for their help and advice in addressing the key areas of concern for the project. Both of them have kept me on track to complete the project and meet its key objectives. Their words of encouragement were deeply appreciated.

Table of Contents



As technology develops in today's growing world, the need for accurate and reliable authentication increases due to rising identity fraud and security requirements. A good biometric system, providing automated recognition based on unique biological features of the human body, helps to address this problem. Examples of such features include facial features, fingerprints, the iris and the voice. The iris recognition discussed in this work uses pattern-recognition techniques based on high-resolution images of the irides of an individual's eyes.

The formation of the iris begins during pre-natal growth and remains unchanged with age. The probability of two irides being alike is almost zero, and this holds even for identical twins or between the left and right eye of an individual, as iris pattern variability among different individuals is structurally distinct and unique. With these characteristics, iris recognition has proved to offer an ideal biometric for authentication.

To perform iris recognition, image processing techniques are applied to extract the iris pattern from an eye image. The iris pattern is processed and its details, excluding unnecessary elements such as eyelids, eyelashes, reflections and noise, are then stored in a database. These stored images and data are then used for comparison against each new image as it becomes available.

This report further explains the methods of how an iris image is detected using the various algorithms and implementation results of an image processing and recognition program.


The aim and objective of this project is to develop image processing and pattern recognition software which can be trained to accomplish the iris recognition task. The software will be required to perform accurate and efficient matching of iris images and to recognise an individual's eyes from an untagged image against a database of tagged images.

To achieve a good recognition program, detailed research into how an iris recognition system works, including the various algorithms involved in each recognition stage, needs to be carried out based on the outline described below.

The iris recognition system is composed of various stages: locating the iris, identifying its patterns, extracting features, and matching the unique iris patterns against a stored database based on mathematical representations.

In the first stage, segmentation to localise the iris region for feature extraction and to occlude surrounding noise is achieved through the use of the Hough transform. Edge, circle and line detection methods each contribute to the success of an iris recognition system.

Normalisation is applied after segmentation to transform the iris region into a fixed template in order to allow similarity comparison between different images. During this stage, variations between images of the same iris caused by camera distance and iris orientation are compensated for, and distortion is reduced, through the use of virtual circles.

Next, encoding is carried out to extract the distinct information from the iris patterns for the comparisons made between two images. Data is generated through the use of Gabor filtering to capture the relationship and similarity between two sets of iris data.

Lastly, matching is done between two iris images: a newly acquired image and an existing image stored in the database. The Hamming distance is employed at this stage to evaluate their similarity and determine whether the images belong to the same iris.


The key contributions in this project include:

Research on how existing iris recognition works

Research on the various methods for Segmentation

Research on the various methods for Normalisation

Research on the various methods for Feature Encoding

Research on the various methods for Database Matching

Familiarise with the use of the Matlab® Image Processing Toolbox

Programming the various functions required in Matlab® for the Segmentation step to localise and isolate the iris region based on key parameters detected and calculated.

Programming the various functions required in Matlab® for the remaining stages

Design a user-friendly graphical user interface for the system




Localization of the iris region of interest to prepare for feature extraction in an eye image is described as the first stage to an iris recognition system.

The iris, pupil and eyelid boundaries detected can be approximated by circles and parabolas using various segmentation methods to correctly identify the parameters of the centre coordinates and radius of both the iris and pupil regions, as shown in Fig (1). In an eye image, elements such as eyelids and eyelashes are considered noise and corrupt the iris patterns, resulting in poor recognition rates. These data-corrupting factors need to be isolated and removed at this stage to avoid data inaccuracy when performing matching in the final stage.

Figure (1) Segmented eye image generated from MATLAB


In [1], the Integro-Differential Operator first searches for the estimated parameters of the iris, pupil and eyelid boundary respectively using the equation:

max_(r, x0, y0) | Gσ(r) * ∂/∂r ∮_(r, x0, y0) I(x, y) / (2πr) ds |   (1)

I(x, y) : Original eye image

r : Iris/pupil circle radius

(x0, y0) : Circle centre coordinates

Gσ(r) : Gaussian smoothing function of scale σ

This operator scans iteratively over the entire image, searching through (r, x0, y0) for the maximum contour integral derivative, to define a contour integration path together with a smoothing function and localise the iris region effectively.

To detect the eyelid boundary, the contour integration path in (1) is changed from circular to arc which localises both the upper and lower boundary. Detection results of images with less than 50% iris visibility are considered invalid since this could result in poorly segmented images that do not offer accurate results when performing matching in the final stage.
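Although the project implementation is in MATLAB, the operator's search over candidate radii can be sketched in Python/NumPy. This is a minimal illustration, not Daugman's actual algorithm: the function names, the 64-point contour sampling and the smoothing width are all assumptions.

```python
import numpy as np

def circle_integral(img, x0, y0, r, n=64):
    """Mean intensity along a circle of radius r centred at (x0, y0)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, x0, y0, radii, sigma=2.0):
    """Return the radius with the largest Gaussian-smoothed radial
    derivative of the circular contour integral (a discrete sketch of
    the operator above, for a fixed candidate centre)."""
    integrals = np.array([circle_integral(img, x0, y0, r) for r in radii])
    deriv = np.abs(np.diff(integrals))            # |d/dr of the integral|
    k = np.arange(-3, 4)
    g = np.exp(-k**2 / (2 * sigma**2))
    g /= g.sum()                                  # Gaussian smoothing kernel
    smoothed = np.convolve(deriv, g, mode='same')
    return radii[int(np.argmax(smoothed)) + 1]
```

In a full search the same score would also be maximised over the candidate centres (x0, y0), not just the radius.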


In theory, the Canny edge detector is known as the optimal edge detector; it uses a multi-stage algorithm which gives accurate detection and localisation of the real edges. This method also gives minimal response to false edges and to real edges which have already been marked.

In [2], the image is first smoothed using a Gaussian filter to reduce noise pixels. Next, the local gradient magnitude and edge direction angle at each point are computed with the following functions:

g(x, y) = sqrt(Gx² + Gy²)   (2)

α(x, y) = arctan(Gy / Gx)   (3)

Gy : Derivative in the vertical direction

Gx : Derivative in the horizontal direction

The edge points detected give rise to ridges, and points not detected are set to zero in the gradient magnitude image, mapping a thin line in the output image. To detect both strong and weak edges, two threshold values, T1 (lower limit) and T2 (upper limit), are selected. For T1 < T2, values larger than T2 are considered strong edges, and values between T1 and T2 are weak edges; this range can be computed automatically by MATLAB if it is not defined. False information may arise when the threshold values are set too high or too low.
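The gradient computation and the double-threshold step can be sketched in Python/NumPy as follows. Central differences stand in for the derivative filters, and the single neighbour-growing pass is a simplification of true hysteresis tracking; both are illustrative choices rather than the exact Canny implementation.

```python
import numpy as np

def gradient_magnitude_angle(img):
    """Gradient magnitude g and edge-direction angle alpha, using
    central differences for the derivatives Gx and Gy."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # horizontal derivative
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0  # vertical derivative
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def hysteresis_mask(mag, t1, t2):
    """Keep strong edges (> t2) and weak edges in (t1, t2] that touch a
    strong edge (single 4-neighbour growing pass, a simplification)."""
    strong = mag > t2
    weak = (mag > t1) & ~strong
    grown = strong.copy()
    grown[1:, :] |= strong[:-1, :]
    grown[:-1, :] |= strong[1:, :]
    grown[:, 1:] |= strong[:, :-1]
    grown[:, :-1] |= strong[:, 1:]
    return strong | (weak & grown)
```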

The image edges are detected by taking the input of an image and producing an edge map as shown in Fig. (2).

(a)Original EyeImage.jpg (b)Canny edge detected image.jpg

Figure (2), (a) Original eye image from CASIA database (b) MATLAB generated Canny Edge map


The Hough transform is an effective method that uses an accumulator to detect geometrical shapes, such as circles or straight lines, found in an image; it is tolerant of boundary gaps and relatively unaffected by image noise.

An algorithm based on the Hough transform was implemented in [3] to locate the radius and centre coordinates of both the iris and pupil regions based on the multiple pixel points measured in the image. A binary edge map is first generated using the image intensity information from the original raw unprocessed image. Next, the parameter values of the radius and circle centre coordinates are represented via edge point voting, with a maximising parameter set as an array indexed by discretised values:

H(xc, yc, r) = Σ_j h(xj, yj, xc, yc, r)   (4)

where h(xj, yj, xc, yc, r) = 1 if (xj − xc)² + (yj − yc)² = r², and 0 otherwise.

xc : Circle centre coordinate in the horizontal direction

yc : Circle centre coordinate in the vertical direction

r : Circle radius

To detect the contours of the upper and lower eyelids in an eye image, Hough transform using parameterised parabolic arcs are applied instead of circle parameterisation.

For a successful Hough transform implementation, good-quality input image data needs to be used in order to avoid critical edges being removed, which would result in poor recognition rates. Filtering methods that help to reduce noise while preserving edges are also applied prior to edge detection and the Hough transform to improve the overall efficiency of the system.
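The circle-voting idea can be sketched in Python/NumPy as follows; the 60-direction discretisation of the candidate-centre circle and the brute-force accumulator are illustrative simplifications of a practical implementation.

```python
import numpy as np

def hough_circle(edge_points, shape, radii):
    """Each edge point (x, y) votes for every candidate centre (xc, yc)
    lying at distance r from it; the (xc, yc, r) cell with the most
    votes wins."""
    h, w = shape
    acc = np.zeros((w, h, len(radii)), dtype=int)   # accumulator H(xc, yc, r)
    thetas = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    for x, y in edge_points:
        for k, r in enumerate(radii):
            xc = np.round(x - r * np.cos(thetas)).astype(int)
            yc = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            np.add.at(acc, (xc[ok], yc[ok], k), 1)  # cast the votes
    xc, yc, k = np.unravel_index(acc.argmax(), acc.shape)
    return xc, yc, radii[k]
```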



Normalisation is applied after segmentation to transform the iris or pupil region into a fixed dimension to allow for template matching in future stages. Different eye images can be captured from the same iris and one of the reasons for this variation is caused by the human pupil dilation due to luminance variations at the point when the iris image is captured. Some other reasons include the change in camera to eye distance, camera rotation and camera zoom.

To achieve accurate recognition rates, it is necessary to compensate for such inconsistency and reduce iris distortion to a minimum. At this stage, normalisation is performed to maintain constant dimensions and identify specific matching coordinate points for multiple images of the same iris taken under various conditions, without affecting the distinct and unique features of the iris.


Image registration in iris recognition [3] refers to transforming the original coordinates of a raw unprocessed image and aligning it to the reference image in the database. A mapping function (u(x, y), v(x, y)) is chosen to minimise the intensity differences between the two images:

∫∫ ( Id(x, y) − Ia(x − u, y − v) )² dx dy   (5)

Ia : Raw unprocessed image

Id : Reference database image

(x, y) : Image coordinates of the raw image

and the transformation of the image coordinates is given by:

(x′, y′)ᵀ = (x, y)ᵀ − s R(φ) (x, y)ᵀ   (6)

(x′, y′) : Transformed image coordinates

s : Scaling factor

R(φ) : Rotation matrix for angle φ
This method provides a close correlation between the transformed raw image and the reference database image which identifies a specific matching coordinate point of the distinct feature in the iris to generate a template.
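As a small Python/NumPy illustration of the scale-and-rotation compensation described above (the function name is hypothetical, and the project itself uses MATLAB):

```python
import numpy as np

def register_coords(x, y, s, phi):
    """Transformed coordinates (x', y') = (x, y) - s * R(phi) @ (x, y):
    the raw-image point is compensated by a scaling s and a rotation by
    phi to align it with the reference template."""
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    p = np.array([x, y], dtype=float)
    return p - s * (R @ p)
```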


The rubber sheet model introduced in [4] and [5] transforms the segmented iris region from Cartesian coordinates into an equivalent rectangular representation for every new image obtained, by assigning each pixel point a pair of polar coordinates, as illustrated in Figure (3).

This remapping of the original iris image from Cartesian coordinates (x, y) into polar coordinates (r, θ) can be represented by:

I( x(r, θ), y(r, θ) ) → I(r, θ)

x(r, θ) = (1 − r) xp(θ) + r xi(θ)

y(r, θ) = (1 − r) yp(θ) + r yi(θ)

r : Radial coordinate on the unit interval from 0 to 1, measured between the pupil and iris boundaries

θ : Circle angle from 0 to 2π

x(r, θ), y(r, θ) : Linear combinations of the corresponding normalised boundary coordinates

xp(θ), yp(θ) : Pupil boundary coordinates along the θ direction

xi(θ), yi(θ) : Iris boundary coordinates along the θ direction

In this method, the image variation resulting from the various factors, such as pupil dilation and changes in pupil size are compensated when taking the pupil center as the reference point to transform the iris region into a rectangular domain with constant dimensions.

Figure (3) Iris region is unwrapped into a rectangular representation
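A nearest-neighbour Python/NumPy sketch of this unwrapping follows; the 8 x 32 output dimensions and the (x, y, radius) boundary triples are illustrative assumptions, and the project itself uses MATLAB.

```python
import numpy as np

def rubber_sheet(img, pupil, iris, n_r=8, n_theta=32):
    """Unwrap the annulus between the pupil and iris boundaries into a
    fixed n_r x n_theta rectangle. pupil and iris are (x, y, radius)
    triples; the output pixel at (r, theta) samples the point a fraction
    r of the way from the pupil boundary to the iris boundary."""
    xp, yp, rp = pupil
    x_i, y_i, r_i = iris
    out = np.zeros((n_r, n_theta))
    for i, r in enumerate(np.linspace(0.0, 1.0, n_r)):
        for j, t in enumerate(np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)):
            x = (1 - r) * (xp + rp * np.cos(t)) + r * (x_i + r_i * np.cos(t))
            y = (1 - r) * (yp + rp * np.sin(t)) + r * (y_i + r_i * np.sin(t))
            row = np.clip(int(round(y)), 0, img.shape[0] - 1)
            col = np.clip(int(round(x)), 0, img.shape[1] - 1)
            out[i, j] = img[row, col]               # nearest-neighbour sample
    return out
```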




The normalisation method introduced in [6] combines a linear and a non-linear transformation to unwrap the iris region through a predefined annular reference point of the pupil margin and iris root (i.e. the inner and outer iris boundary).

The following assumptions are made:

The pupil margin and iris root are concentric circles whose radii are r and R respectively.

There is no pupil margin rotation when the pupil size changes.

The pupil shape remains circular when the pupil size changes.

A virtual arc is created based on the changed pupil size at a fixed dimension, using a reference value obtained by computing the mean of the corresponding arcs in the database.

The pupil radius is scaled to a fixed radial resolution, regardless of the iris size, to linearly map the entire region of interest to a fixed dimension by equally sampling the dashed line of the sampling circle in Fig (4).

Figure (4) Non linear Normalisation Model


Another method used to perform normalisation without transforming from Cartesian to polar coordinates can be found in [7]. In order to compensate for the variations found in different images of the same iris, it is necessary to up-sample the images so that the iris diameter of every segmented image is scaled to a constant diameter.

When comparing two images, the ratio difference of the diameter is calculated to construct virtual circles in order to extract the unique iris features. A normalisation value N (power-of-two integer) is selected to ensure that the virtual circles are normalized to contain the same number of data points for every template created. This value is crucial in determining the accuracy of the overall system.
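One way to fix the number of samples per virtual circle is periodic linear interpolation. This Python/NumPy sketch (the function name and the default N are assumptions) resamples a ring of intensity values to a power-of-two length:

```python
import numpy as np

def normalise_ring(samples, N=64):
    """Resample a closed ring of intensity samples to a fixed
    power-of-two length N, so every template holds the same number of
    data points regardless of the original circle size."""
    assert N > 0 and N & (N - 1) == 0, "N must be a power of two"
    src = np.linspace(0.0, 1.0, len(samples), endpoint=False)
    dst = np.linspace(0.0, 1.0, N, endpoint=False)
    # period=1.0 makes the interpolation wrap around the circle
    return np.interp(dst, src, samples, period=1.0)
```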



Features are extracted from the normalised iris patterns so that only the unique information present in the iris region is encoded for comparison between two iris templates.

A specific feature extraction algorithm needs to be defined based on the highly distinct data that an individual's iris region provides. The result generated is then passed to an equivalent matching function which computes the similarity between two templates to determine the relationship between them.


The encoding of the iris pattern based on Gabor filters by J. Daugman [1] provides a good representation of the relationship between a spatial signal and spatial frequency; each iris pattern is demodulated to extract its phase information using a quadrature pair of Gabor wavelets.

Gabor filters are formed by modulating a sinusoidal signal with a Gaussian function to provide frequency localisation. The phasor coordinates of the complex coefficients are computed and evaluated based on their location in the complex plane, as shown in Figure (2.4).

Figure (2.4), Phase Quadrant code
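The phase-quadrant idea can be sketched in Python/NumPy: filter a 1-D iris-pattern row with one complex (quadrature) Gabor wavelet and keep only the signs of the real and imaginary responses. The wavelet frequency and envelope width here are arbitrary illustrative choices, not Daugman's parameters.

```python
import numpy as np

def phase_quadrant_bits(signal, freq=4):
    """Demodulate a 1-D signal with a quadrature pair of Gabor wavelets
    (one complex wavelet) and quantise each coefficient into two bits
    recording which quadrant of the complex plane it falls in."""
    n = len(signal)
    t = np.arange(n) - n / 2
    envelope = np.exp(-t**2 / (2 * (n / 8) ** 2))    # Gaussian envelope
    carrier = np.exp(1j * 2 * np.pi * freq * t / n)  # complex sinusoid
    wavelet = envelope * carrier
    coeffs = np.convolve(signal, wavelet, mode='same')
    bits = np.empty((n, 2), dtype=np.uint8)
    bits[:, 0] = coeffs.real >= 0                    # Re half-plane bit
    bits[:, 1] = coeffs.imag >= 0                    # Im half-plane bit
    return bits
```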


In Lim et al. [7], the Haar wavelet is used as the mother wavelet, Figure (2.5), to obtain the feature vectors from the iris region. In Figure (2.6), H refers to the high-pass filter and L refers to the low-pass filter. Based on multi-dimensional filtering, a resulting feature vector of 87 dimensions is computed, each entry containing a real value between −1.0 and +1.0. The vector is quantised into binary values by replacing positive values with '1' and negative values with '0', so that an iris template contains only 87 bits.
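A Python/NumPy sketch of the two ingredients described above: one level of the Haar transform (low-pass pairwise sums and high-pass pairwise differences) and the sign quantisation of a real-valued feature vector. The short example vector in the test is illustrative only.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: low-pass (L) pairwise
    sums and high-pass (H) pairwise differences, scaled by 1/sqrt(2)."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return low, high

def quantise_feature_vector(fv):
    """Quantise a real-valued feature vector (entries in [-1.0, +1.0])
    into a binary code: positive -> 1, negative -> 0."""
    return (np.asarray(fv, dtype=float) > 0).astype(np.uint8)
```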

A test completed by Lim et al. showed the Haar wavelet achieving a better recognition rate than Gabor filtering. The results can be seen in Table (2.1), with an efficiency improvement of 0.9% for data learning and 2.1% for the data test.

Table (2.1) Comparison between the Haar wavelet and Gabor filtering, giving recognition rates for data learning and data test.

Figure (2.5) Haar Wavelet; Figure (2.6) Feature Vector


Wildes [3] makes use of isotropic bandpass decomposition and encodes the iris region using Laplacian of Gaussian filters. This filter can be specified as in Equation (2.7). The Laplacian pyramid, a representation of the filtered image, is able to compact this data and save only the significant information required.

∇²G(ρ) = −(1 / (π σ⁴)) (1 − ρ² / (2σ²)) e^(−ρ² / (2σ²))   (2.7)

where ρ represents the radial distance of a point from the filter centre and σ represents the standard deviation of the Gaussian.
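Sampling the Laplacian of Gaussian on a discrete grid can be sketched as follows; the 15 x 15 kernel size and sigma = 1.0 used in the check are arbitrary illustrative values.

```python
import numpy as np

def laplacian_of_gaussian(size, sigma):
    """Sample the Laplacian-of-Gaussian kernel
    -(1/(pi*sigma^4)) * (1 - rho^2/(2 sigma^2)) * exp(-rho^2/(2 sigma^2))
    on a size x size grid, where rho is the radial distance from the
    filter centre."""
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    rho2 = xx**2 + yy**2
    return (-1.0 / (np.pi * sigma**4)
            * (1 - rho2 / (2 * sigma**2))
            * np.exp(-rho2 / (2 * sigma**2)))
```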



After localising the iris region in an image and extracting its unique features, the final step of an iris recognition system is to decide whether the newly acquired iris template matches any stored iris template in the image database.

This decision environment relies on an algorithm which evaluates and analyses the similarity between two normalised, feature-encoded iris templates based on a judgment criterion. The judgment criterion is predefined into two groups: one range of values computed from the matching algorithm corresponds to templates of the same iris, and another range to templates generated from different irides.


The Hamming distance is a measure of the similarity and difference between two sets of symbols of equal bit length. Based on the distance result, it determines whether two compared bit templates were generated from the same iris or from different irides.

This method is discussed in [8] and described as a test of statistical independence, computed by applying Boolean operators to the 2048-bit phase vectors of the compared bit templates. The Exclusive-OR (XOR) detects the fraction of disagreeing bits between the two iris templates, and the result is AND'ed (∩) with the masking bits to exclude any iris region occluded by noise such as eyelids or reflections.

The Hamming distance equation is given as:

HD = ‖ (Code A ⊕ Code B) ∩ Mask A ∩ Mask B ‖ / ‖ Mask A ∩ Mask B ‖

Code A, Code B : Phase code bit vectors of the two iris bit templates

Mask A, Mask B : Mask bit vectors of the two iris bit templates

A positive identification is made when the Hamming distance falls below the predefined decision threshold; otherwise, the templates are judged to belong to different irides.
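For templates held as NumPy bit arrays, the masked Hamming distance above reduces to a few Boolean operations. This is a sketch; practical systems also shift one code over a range of rotations and keep the minimum distance.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Masked Hamming distance: the fraction of disagreeing bits,
    counted only where both masks flag the bit as noise-free."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()
```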


The normalised correlation used in [3] between the newly acquired template and the database template can be defined as:

C = Σᵢ Σⱼ ( p1[i, j] − μ1 ) ( p2[i, j] − μ2 ) / ( n m σ1 σ2 )

p1 : Image template 1 of size n × m

p2 : Image template 2 of size n × m

μ1, μ2 : Means of p1 and p2 respectively

σ1, σ2 : Standard deviations of p1 and p2 respectively

This method compares the intensity matrices of the two templates, with the larger matrix considered the base template. The mean of each template is subtracted and the result divided by the standard deviations to give the correlation coefficient, which is compared against a decision threshold to determine whether the templates are from the same iris or from different irides.
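A direct Python/NumPy transcription of the normalised correlation for two equally sized templates (equal sizes are assumed here for simplicity; the text above handles unequal sizes by taking the larger matrix as the base):

```python
import numpy as np

def normalised_correlation(p1, p2):
    """Normalised correlation: subtract each template's mean, then
    divide by the area n*m and the product of the standard deviations.
    Returns +1 for identical templates and -1 for inverted ones."""
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    n, m = p1.shape
    num = ((p1 - p1.mean()) * (p2 - p2.mean())).sum()
    return num / (n * m * p1.std() * p2.std())
```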


The entire project is implemented by using the functions provided from the Image Processing Toolbox available in MATLAB® [10] and the images used to perform Iris Recognition are provided by CASIA [9].



In the segmentation stage, several methods discussed are combined to generate a set of segmentation data required in order to achieve an efficient iris recognition system. A flowchart of this process is illustrated in Flowchart (1).


Flowchart (1): Process flow of Segmentation

To start, the raw iris image is first filtered through a median filter to help reduce speckle noise by computing the median value over the image matrix. The system then generates an edge map by using a Canny edge detector, with the local gradient and edge points computed automatically by the edge function:

BW = edge(I,'canny',thresh,sigma) (1.1)

The low and high threshold values are selected automatically by the edge function based on the gradient magnitude to detect the strong and weak edges in the image; in this project, a higher sigma value was used to help smooth the image and reduce noise that distorts the image data.

To detect the iris/pupil region in the image, the range of radius values was preset manually based on the information provided by CASIA, with the pupil and iris radii ranging from 28 to 75 pixels and 90 to 150 pixels respectively.

The Hough transform is next performed to find the iris boundary within the iris image. A vector containing all the circle circumference points, converted from polar to Cartesian coordinates, is first computed over all possible iris radius ranges to detect the actual parameters of the iris centre coordinates and radius. The binary image is then cropped to the size of the iris region to perform detection of the pupil parameters, since the pupil is always located within the iris region; processing only this region helps reduce the time required to search the entire image. Once this is completed, the generated parameters of the centre coordinates and radius of the iris and pupil regions are stored in matrices named 'iris_parms' and 'pupil_parms' respectively.

With the detected parameters, the iris and pupil boundaries are next drawn on the iris image using the following method in Figure ( ):

x = a + r cos(θ)

y = b + r sin(θ)

x, y : x and y coordinates of a point on the circle circumference

a, b : Centre coordinate point

r : Radius of the circle

θ : Defined range of angles from 0 to 2π
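The parametric circle above is straightforward to transcribe; this Python/NumPy helper (the MATLAB implementation would be analogous) returns n points on the boundary circle:

```python
import numpy as np

def circle_points(a, b, r, n=360):
    """Points on a circle of radius r centred at (a, b):
    x = a + r*cos(theta), y = b + r*sin(theta), theta in [0, 2*pi)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return a + r * np.cos(theta), b + r * np.sin(theta)
```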


Figure ( ): (a) Method discussed in (15,16), (b) iris and boundary detected in image

The figure ( ) below shows the generated image of the steps discussed above


Figure ( ): (a) Original Eye Image, (b) Canny Edge detected map, (c) Cropped Iris region


The iris images from CASIA segmented well, since these images were captured specifically for iris recognition research purposes.

Based on the experimental results, 100 out of 120 eye images were correctly segmented, giving a success rate of approximately 83%. The images which failed segmentation did so due to very noisy pixels (for example, eyelashes covering half the iris region), which distorted the image data and resulted in inaccurate detection of the iris and pupil parameters.








The plan laid out for the next part of this project is as follows:

Improve the efficiency of the Hough transform, as the system currently takes around one minute to process enrolment and iris recognition

Improve the matching algorithm used, as the current method is still not able to accurately identify and match all images from the database

Include more features in the graphical user interface for a more user-friendly system