Contact Lenses In An Iris Recognition System Biology Essay

Abstract - This paper presents an implementation of iris recognition for iris images with contact lenses using hierarchical phase-based matching, an image matching technique that uses the phase components of the 2D DFT. The experimental results show that the presence of contact lenses in iris images cannot be clearly identified by traditional edge detection algorithms. Although the probability of finding two identical irises is close to zero, an iris pattern can be duplicated using advanced contact lens technology. Phase-based image matching has so far been successfully applied to high-accuracy iris recognition tasks in biometrics. Experimental evaluation using the CASIA iris image databases (versions 1.0 and 2.0) and the Iris Challenge Evaluation 2005 database demonstrates that the use of Fourier phase spectra of iris images makes highly accurate recognition possible with the phase-based image matching algorithm. Hence, the proposed system implements hierarchical phase-based matching for irises with contact lenses.


Keywords - iris recognition, phase-based image matching, biometrics, hierarchical phase-based matching.

INTRODUCTION

In the present world, iris recognition is considered the most accurate and reliable biometric technique. The use of the human iris as a biometric feature offers many advantages over other biometric features. The iris is an internal organ of the human body that is visible from the outside, yet well protected from external modifiers. Two eyes from the same individual, although very similar, contain unique patterns. Even though there are many other recognition systems, such as fingerprint, voice and face recognition, those systems can be defeated comparatively easily, whereas it is quite hard to defeat iris recognition because every iris has a distinct and unique pattern.

Iris recognition is therefore more suitable, being more secure and more reliable. Before recognition of the iris takes place, the iris is located using landmark features. These landmark features and the distinct shape of the iris allow for imaging, feature isolation and extraction. Localization of the iris is an important step in iris recognition because, if done improperly, the resultant noise (e.g. eyelashes, reflections and eyelids) in the image may lead to poor performance. Iris imaging requires the use of a high-quality digital camera. Today's commercial iris cameras typically use infrared light to illuminate the iris without causing harm or discomfort to the subject. An iris image is typically captured using a non-contact imaging device, which is of great importance in practical applications.

The objective of the project is the implementation of iris recognition for iris images with contact lenses using hierarchical phase-based matching, an image matching technique using the phase components of the 2D DFT. Phase-based image matching has so far been successfully applied to high-accuracy iris recognition tasks in biometrics.

This paper is organized as follows. Section 2 describes the related methods available. Section 3 briefly describes the proposed methodology. Section 4 deals with the experimental results and problems. Section 5 includes the conclusion and future enhancement.

RELEVANT WORK

1. Iris Verification

In this section we describe the algorithms proposed for iris verification. An eye image is taken as input, from which the iris is detected and converted into polar coordinates. The detected iris image contains noise due to the presence of eyelids and eyelashes. Masking is performed on the polar image to remove this noise. From the masked polar image, templates are generated, which are further used for matching [3]. A 1D log-polar Gabor wavelet and Euler numbers are used to create the iris template and the Euler code, respectively. Next, Hamming Distance (HD) is used to match the iris templates and Directional Difference Matching (DDM) is used for matching Euler codes. These matching algorithms give the matching scores MSIT for the iris template and MSEC for the Euler code, respectively. A decision strategy uses these matching scores to accept or reject the user against a threshold.
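The fractional Hamming distance used for template matching can be sketched as follows; the function name and the use of NumPy boolean arrays are our own illustrative assumptions, not part of the cited method:

```python
import numpy as np

def hamming_distance(code_a, mask_a, code_b, mask_b):
    """Fractional Hamming distance between two binary iris templates,
    counting only bits that are valid (unmasked) in both templates."""
    valid = mask_a & mask_b                  # bits usable in both codes
    disagree = (code_a ^ code_b) & valid     # XOR finds differing bits
    return disagree.sum() / valid.sum()

# Identical templates give HD = 0; complementary templates give HD = 1.
code = np.random.randint(0, 2, 2048).astype(bool)
mask = np.ones(2048, dtype=bool)
print(hamming_distance(code, mask, code, mask))    # 0.0
print(hamming_distance(code, mask, ~code, mask))   # 1.0
```

In practice the decision threshold sits between these extremes, so that genuine comparisons (low HD) are accepted and impostor comparisons (HD near 0.5) are rejected.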

Iris Detection: The first stage of iris recognition is the detection of pupil and the iris boundaries from the input eye image. It also involves preprocessing of the iris image to normalize the iris and make it scale invariant.

Detecting Pupil: To find the pupil, a linear threshold is applied on the eye image, i.e. pixels with intensity less than a specified empirical value are converted to 0 (black) and pixels greater than or equal to the threshold are assigned 1 (white). Freeman's chain-code algorithm is used to find regions of 8-connected pixels having the value 0. It is possible that eyelashes also satisfy the threshold condition, but they have a much smaller area than the pupil. Using this knowledge, we can cycle through all regions and apply Freeman's chain-code algorithm to retrieve the black pupil in the image. From this region, the central moment is obtained. The edges of the pupil are found by creating two imaginary orthogonal lines passing through the centroid of the region. Starting from the center toward both extremities, the boundaries of the binarized pupil are defined by the first pixel with intensity 1.
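A minimal sketch of this pupil search, using connected-component labeling as a stand-in for Freeman's chain-code tracing (the threshold value, function name, and radius estimate are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def locate_pupil(eye, threshold=70):
    """Binarize the eye image and return (cx, cy, radius) of the largest
    dark region; small dark regions such as eyelashes are discarded by
    keeping only the region of maximal area."""
    dark = eye < threshold                                       # pupil pixels -> True
    labels, n = ndimage.label(dark, structure=np.ones((3, 3)))   # 8-connected regions
    if n == 0:
        return None
    areas = ndimage.sum(dark, labels, index=range(1, n + 1))
    biggest = int(np.argmax(areas)) + 1                          # pupil beats eyelashes
    cy, cx = ndimage.center_of_mass(labels == biggest)           # central moment
    radius = float(np.sqrt(areas[biggest - 1] / np.pi))          # area -> radius
    return cx, cy, radius

# Synthetic eye: bright background, dark disc (pupil) plus a small speck.
eye = np.full((120, 120), 200.0)
yy, xx = np.mgrid[:120, :120]
eye[(yy - 60) ** 2 + (xx - 50) ** 2 <= 15 ** 2] = 20   # pupil centered at (50, 60)
eye[5:8, 5:8] = 20                                     # eyelash-like speck
cx, cy, r = locate_pupil(eye)
```

On this synthetic image the recovered center lands on (50, 60) and the radius near 15, while the eyelash-like speck is correctly ignored.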


Finding Iris Boundaries: Next, the edges of the iris are determined. The algorithm for finding the edges of the iris from the eye image I(x, y) is as follows:

1. The center of the pupil (Cpx, Cpy) and radius rp are known from the pupil detection algorithm.

2. Apply a linear contrast filter on image I(x, y) to get the linear-contrasted image P(x, y).

3. Create vector A = {a1, a2, …, aw} that holds the pixel intensities of the imaginary row passing through the center of the pupil, with w being the width of the image P(x, y).

4. Create vector R from vector A, containing the elements of A starting at the right fringe of the pupil and ending at the rightmost element of A, and vector L similarly for the left side. For each side of the pupil (vector R for the right side and vector L for the left side): a. Calculate the average-window vector Avg = {b1, …, bn}, where n = |L| or n = |R|. Vector Avg is subdivided into windows of size z, and the value of every element in a window is replaced by the mean value of that window. b. Locate the edge point for both vectors L and R as the first increment in the value of Avg that exceeds a threshold t.

Thus the pupil, the iris center, and the radius are calculated, and a circle is drawn using these values to locate the pupil and iris edges.
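The average-window edge search in step 4 can be sketched in isolation; the window size and threshold here are illustrative values, not parameters from the original work:

```python
import numpy as np

def edge_from_profile(profile, window=8, t=15.0):
    """Average-window edge search on a 1-D intensity profile taken from the
    pupil fringe outward: smooth in non-overlapping windows of size
    `window`, then return the pixel offset of the first window-mean jump
    exceeding threshold `t` (the iris/sclera boundary), or None."""
    usable = len(profile) // window * window
    means = profile[:usable].reshape(-1, window).mean(axis=1)
    jumps = np.diff(means)                       # increments between window means
    hits = np.flatnonzero(jumps > t)
    return None if hits.size == 0 else int((hits[0] + 1) * window)

# Dark iris band followed by brighter sclera: the first big jump marks the edge.
profile = np.concatenate([np.full(64, 60.0), np.full(64, 160.0)])
print(edge_from_profile(profile))  # 64
```

Running the same search on the left-hand profile gives the other edge point; together they fix the iris center and radius.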

Isolating Eyelids and Eyelashes: Eyelids and eyelashes are isolated from the detected iris image considering them as noise because they degrade the performance of the system. The eyelids are isolated by first fitting a line to the upper and lower eyelid using the linear Hough transform. A horizontal line is then drawn which intersects with the first line at the iris edge that is closest to the pupil. A second horizontal line allows the maximum isolation of eyelid regions. Canny edge detection is used to create the edge map, and only the horizontal gradient information is taken. If the maximum in Hough space is lower than a set threshold, then no line is fitted, since this corresponds to non-occluding eyelids. Also, the lines are restricted to lie exterior to the pupil region, and interior to the iris region. A similar process is followed for detecting eyelashes.

Generating Polar Iris Image and its Mask: After detecting the eyelids and the eyelashes, a mask based on them is used to cover the noisy area and extract the iris without noise. Image processing of the eye region is computationally expensive, as the area of interest is donut-shaped and grabbing the pixels in this region requires repeated rectangular-to-polar conversion. To simplify this, the iris is first unwrapped into a rectangular region and the pupil is removed. Let (x, y) be any point in the input image with respect to the center of the pupil, lying between the inner and outer boundaries of the iris, and let f(x, y) be the pixel value at (x, y). Then the corresponding polar coordinates (r, θ) are

r = √(x² + y²),  θ = tan⁻¹(y / x),  for θ ∈ (−π, π].

A mask for this polar iris image is generated using the masked iris, and the process is similar to polar iris image generation.
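The unwrapping step above can be sketched as follows; nearest-neighbour sampling and the resolution parameters are simplifying assumptions (a real system would interpolate):

```python
import numpy as np

def unwrap_iris(eye, cx, cy, r_pupil, r_iris, n_r=32, n_theta=256):
    """Unwrap the annular iris into a rectangular (r, theta) image by
    sampling along rays from the pupil center, following
    r = sqrt(x^2 + y^2), theta = atan2(y, x) with theta in (-pi, pi]."""
    thetas = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, n_r)
    r, t = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(np.rint(cx + r * np.cos(t)).astype(int), 0, eye.shape[1] - 1)
    ys = np.clip(np.rint(cy + r * np.sin(t)).astype(int), 0, eye.shape[0] - 1)
    return eye[ys, xs]                           # shape (n_r, n_theta)

# Sanity check: a constant-intensity eye unwraps to a constant rectangle.
polar = unwrap_iris(np.full((100, 100), 7.0), 50, 50, 10, 40)
```

The noise mask is unwrapped with the same mapping, so masked eyelid and eyelash pixels stay aligned with the polar iris image.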

2. Data Collection

A major hindrance to research in the field of iris recognition has been a shortage of publicly available images. With other biometrics such as face and fingerprints, there is access to thousands of images from various sources, but, until recently, the only readily available source of iris images was the CASIA database [1]. Although this data set has proven invaluable, its lack of variety may have led to the design of somewhat biased systems. Our studies suggest that any algorithm optimized for an image set consisting primarily of one type of iris (Asian, in this case) will be inherently biased toward a particular pattern and may not be effective when applied to a more diverse database. Recently, other iris image data sets have been assembled. At the time of writing, more than 800 classes have been collected, of which a subset of 150 were available for the experiments described here. The age, ethnicity and gender breakdown of a superset gives an indication of the diversity of those members of a similar population who were willing to give this information. It is known that images of the human iris obtained with Near-Infrared (NIR) lighting are necessary to reveal complex textures for darkly pigmented irises, while lighter irises can be imaged either in the infrared or the visible spectrum [1].


In collecting the database, eyes are imaged using an NIR-sensitive, high-resolution (1,280 × 1,024) machine-vision camera with infrared lighting whose spectrum peaks around 820 nm. Daylight cut-off filters are used to eliminate reflections due to ambient visible light, and care is taken to focus on the iris rather than on any other part of the eye, such as the eyelids or eyelashes. With the subject sitting and positioned against chin and forehead rests, the camera is manually positioned. A focal length of 35 mm, with the lens 20 cm from the eye, ensures that a large proportion of the image is that of the iris. The incoherent NIR light source is an array of LEDs close to the camera lens, so that its reflections fall within the boundary of the pupil when the subject looks into the lens. To avoid thermal injury, the power of infrared radiation in the range 780 nm to 3 μm should be limited to less than 10 mW/cm² according to US recommendations. A more stringent regulation for lasers (coherent light), widely followed in Europe, suggests a more conservative 0.77 mW/cm². Measurements on our apparatus indicate that the power of the incoherent infrared radiation reaching the eye is less than 0.5 mW/cm². Due to the presence of ambient visible lighting, the pupil is partly constricted, thereby providing an additional safety mechanism.

Fig. 1 Images taken from a video-sequence of an eye illustrating the variations in the size of the reflected light source.

3. PROPOSED METHODOLOGY

1. Phase Based Matching Algorithm.

The key idea in this paper is to use phase-based image matching for the matching stage. Before discussing the details of the matching algorithm, this section introduces the principle of phase-based image matching using the Phase-Only Correlation (POC) function.

Consider two N1 × N2 images f(n1, n2) and g(n1, n2), where we assume that the index ranges are n1 = −M1, …, M1 (M1 > 0) and n2 = −M2, …, M2 (M2 > 0) for mathematical simplicity, and hence N1 = 2M1 + 1 and N2 = 2M2 + 1.

When two images are similar, their POC function gives a distinct sharp peak. When f(n1, n2) = g(n1, n2), the POC function rfg(n1, n2) becomes the Kronecker delta function δ(n1, n2). If the two images are not similar, the peak value drops significantly. The height of the peak can be used as a good similarity measure for image matching, and the location of the peak shows the translational displacement between the two images. In our previous work on fingerprint recognition, we proposed the idea of the Band-Limited POC (BLPOC) function for efficient matching of fingerprints, considering the inherent frequency components of fingerprint images. Through a set of experiments, we have found that the same idea is also very effective for iris recognition. Our observation shows that the 2D DFT of a normalized iris image sometimes includes meaningless phase components in high-frequency domains, and that the effective frequency band of the normalized iris image is wider in the k1 direction than in the k2 direction. The original POC function rfg(n1, n2) emphasizes the high-frequency components, which may be less reliable. This reduces the height of the correlation peak significantly, even if the two iris images are captured from the same eye. The BLPOC function, on the other hand, allows us to evaluate the similarity using the inherent frequency band of the iris texture.

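The POC and BLPOC functions can be sketched with NumPy FFTs; the exact band-limiting here is our simplified reading, with K1 and K2 bounding the retained frequencies in the k1 and k2 directions:

```python
import numpy as np

def poc(f, g):
    """Phase-Only Correlation surface: inverse DFT of the normalized
    cross-phase spectrum.  Identical images yield a Kronecker-delta-like
    peak of height ~1; the peak offset gives the translation."""
    cross = np.fft.fft2(f) * np.conj(np.fft.fft2(g))
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))  # keep phase, drop magnitude
    return np.real(np.fft.fftshift(r))                 # put the peak mid-image

def blpoc(f, g, K1, K2):
    """Band-Limited POC: keep only the central (low-frequency) band of
    size (2*K1+1) x (2*K2+1) before inverting, discarding the unreliable
    high-frequency phase components."""
    F = np.fft.fftshift(np.fft.fft2(f))
    G = np.fft.fftshift(np.fft.fft2(g))
    cross = F * np.conj(G)
    phase = cross / (np.abs(cross) + 1e-12)
    c1, c2 = f.shape[0] // 2, f.shape[1] // 2
    band = phase[c1 - K1:c1 + K1 + 1, c2 - K2:c2 + K2 + 1]
    return np.real(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(band))))

img = np.random.rand(32, 32)
peak = poc(img, img).max()            # ~1.0 for identical images
bl_peak = blpoc(img, img, 4, 8).max() # ~1.0, using only the low band
```

For dissimilar images the cross-phase spectrum is incoherent, so the inverse transform spreads the energy and the peak height collapses, which is what makes the peak a usable similarity score.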

2. Hierarchical Phase Based Image Matching

Hierarchical matching is used in the matching-score calculation step along with Phase-Only Correlation (POC). In hierarchical matching, consider an aligned iris image f(n1, n2). The POC function is calculated for the latent image f(n1, n2); let its phase component be θ1. The matching score is then evaluated hierarchically against at least two database images, g(n1, n2) and h(n1, n2). If the phase component θ1 matches either database image, g(n1, n2) or h(n1, n2), the matching score value is returned. Matching can be seen as traversing a tree structure of templates. The matching process starts at the root; the interest locations lie initially on a uniform grid over relevant regions in the image. The tree can be traversed in breadth-first or depth-first fashion. In the proposed method, the top-down approach is used: the top-down sequence follows the nodes from the root to a leaf. Its principal functionality is indicated in the following code fragment:

01. Input_root_image(i)
02. for each L in leaf(i)
03.     if f(L) = i
04.         marknode_select(L)
05.         return marknode_select(L) for matching score calculation
06.         if genuine matching
07.             return matching score
08.             back to leaf
09.     else
10.         top_down_evaluate(s)

Fig 2. Flow Diagram of Hierarchical Phase Based Matching.

The function Input_root_image gets the actual node as a parameter, which initially is the root of the search graph (line 1). Each leaf L of the current node is addressed in a loop (line 2), and for each L the formula f(L) is evaluated. If f(L) is true, the current leaf is marked as relevant (line 4). If f(L) fails, the recursion is continued until there is a node that fulfills the formula or until no subsequent node is left (line 6). When applying this code fragment to iris recognition, Input_root_image gets the latent iris image, which is the root of the search graph. The for-each loop is used to go through the leaves; if a leaf node matches the root node, it is marked as a relevant image and returned for matching-score calculation. If it fails in the matching-score calculation, control returns and the recursion continues until a match is found. So, in place of matching against a single database image, two database images can be matched hierarchically against the input image, and by expanding the hierarchy of images the matching speed increases.
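A hedged Python rendering of this traversal, with the template tree as nested dicts and a caller-supplied scoring function; all names and the dict layout are illustrative, not from the original pseudocode:

```python
def top_down_match(node, score_fn, threshold):
    """Top-down traversal of a template tree.  Leaves carry a 'template'
    key; inner nodes carry 'children'.  The first leaf whose matching
    score reaches `threshold` is returned as (template, score); otherwise
    the recursion continues until no subsequent node is left (None)."""
    for leaf in node.get("children", []):
        if "template" in leaf:                     # leaf node: evaluate f(L)
            score = score_fn(leaf["template"])
            if score >= threshold:                 # genuine match found
                return leaf["template"], score
        hit = top_down_match(leaf, score_fn, threshold)  # recurse deeper
        if hit is not None:
            return hit
    return None

# Toy run: two database images g and h under the root; the latent matches h.
tree = {"children": [{"template": "g"}, {"template": "h"}]}
result = top_down_match(tree, lambda t: 1.0 if t == "h" else 0.2, 0.8)
print(result)  # ('h', 1.0)
```

In the real system score_fn would be the BLPOC peak height between the latent image and the stored template; the traversal stops at the first genuine match instead of scoring the whole database.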

EXPERIMENTAL RESULTS

Iris recognition has been an active research topic in recent years due to its high accuracy. Historically there were few public iris databases, while many face and fingerprint databases were available, and the lack of iris data for algorithm testing was a main obstacle to research on iris recognition. To promote such research, the National Laboratory of Pattern Recognition (NLPR), Institute of Automation (IA), Chinese Academy of Sciences (CAS) provides an iris database freely to iris recognition researchers. The CASIA Iris Image Database (ver 1.0) includes 756 iris images from 108 eyes (hence 108 classes). The slit-lamp images are taken with a slit-lamp apparatus, which is used in many research and development centres in eye hospitals. This apparatus has an infrared source attached; infrared is very useful because under infrared illumination the pupil appears darker, which makes the iris pattern more visible for the experiments. Using this slit-lamp apparatus we can obtain iris images with contact lenses.

1. Edge Detection Algorithm.

There are many types of edge detection algorithms available in MATLAB, and their outputs are shown in the figures below. These edge detection algorithms were simulated on the different databases: the Sobel, Prewitt, Roberts, Canny, isotropic and Laplacian edge detection techniques were applied separately to all three databases, i.e. the CASIA database, the IIT database and the slit-lamp images, and the outputs were recorded for further verification, as shown in Figs. 3, 4 and 5.
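A minimal NumPy/SciPy stand-in for the gradient-based operators in MATLAB's edge function can be sketched as follows; the kernels are the standard ones, while thresholding and the Canny/Laplacian variants are omitted for brevity:

```python
import numpy as np
from scipy import ndimage

# Horizontal kernels; the vertical kernel of each operator is the transpose.
KERNELS = {
    "sobel":     np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "prewitt":   np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float),
    "isotropic": np.array([[-1, 0, 1],
                           [-np.sqrt(2), 0, np.sqrt(2)],
                           [-1, 0, 1]]),
}

def edge_magnitude(image, method):
    """Gradient-magnitude edge map for one of the named operators."""
    kx = KERNELS[method]
    gx = ndimage.convolve(image.astype(float), kx)    # horizontal gradient
    gy = ndimage.convolve(image.astype(float), kx.T)  # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge is strong at the boundary and zero in flat regions.
step = np.zeros((32, 32))
step[:, 16:] = 100.0
mag = edge_magnitude(step, "sobel")
```

Running each operator over the same database image and comparing the resulting magnitude maps reproduces, in miniature, the side-by-side comparison reported in the figures.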

2. Pre-Processing Implementation.

Iris Pre-Processing

Iris image preprocessing, including iris localization and iris image quality evaluation, is the key step in iris recognition and has a close relationship to the accuracy of matching. Many iris localization algorithms have been proposed so far. In this paper, we propose a new iris localization algorithm that adopts edge-point detection and curve fitting. After this, we establish an integral iris image quality evaluation system, which is necessary in an automatic iris recognition system. All the procedures of the algorithm are shown to be valid through our experiments on the databases.

Fig 3 Edge detection Output for CASIA Database Images.

Fig 4 Edge detection Output for IIT Database Images.

Fig 5 Edge detection Output for Slit-Lamp Images.

B. Pupil Boundary Detection.

The pupil boundary is detected using the thresholding and Freeman chain-code procedure described in Section 2: the eye image is binarized with an empirical linear threshold, 8-connected dark regions are traced, the largest region is taken as the pupil, its central moment gives the centroid, and the pupil edges are located along two imaginary orthogonal lines through the centroid. The output is shown in Fig 6.

Fig 6 Pupil Boundary Detection.

C. Iris Boundary Detection.

Next, the edges of the iris are determined, as shown in Fig 7, using the algorithm described in Section 2: a linear contrast filter is applied to the eye image, intensity profiles are taken to the left and right of the pupil along the row through its center, each profile is smoothed with an average window, and the iris edge is located as the first window-mean increment exceeding the threshold t. The pupil and iris centers and radii are then used to draw circles locating the pupil and iris edges.

Fig 7 Iris Boundary Detection.

D. Feature Extraction.

Given a pair of normalized iris images fnorm(n1, n2) and gnorm(n1, n2) to be compared, the purpose of this process is to extract effective regions of the same size from the two images, as illustrated in Fig. 8. Let the size of the two images fnorm(n1, n2) and gnorm(n1, n2) be N1 × N2, and let the heights of the irrelevant regions in fnorm(n1, n2) and gnorm(n1, n2) be hf and hg, respectively. We obtain the effective images feff(n1, n2) and geff(n1, n2) by extracting effective regions of size N1 × {N2 − max(hf, hg)}. In this way we eliminate irrelevant regions such as masked eyelids and specular reflections. A problem may occur, however, when most of the normalized iris image is covered by the eyelid: the extracted region becomes too small to perform image matching. To solve this problem, we extract multiple effective subregions from each iris image by changing the height parameter h. In our experiments, we extract six subregions from a single iris image by setting the parameter h to 55, 75 and 95 pixels. Our experimental observation shows that the recognition performance of the proposed algorithm is not sensitive to these values, so we do not optimize the parameter h.

Fig 8 Normalized Image.
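The subregion extraction can be sketched as follows; taking one band from the top and one from the bottom of the normalized image per height h is our assumed reading of how three h values yield six subregions:

```python
import numpy as np

def effective_subregions(norm, heights=(55, 75, 95)):
    """From a normalized iris image, extract one band of h rows from the
    top and one from the bottom for each height h, giving six candidate
    effective regions for three heights.  Bands taller than the image
    are skipped."""
    bands = []
    for h in heights:
        if h <= norm.shape[0]:
            bands.append(norm[:h, :])    # upper band
            bands.append(norm[-h:, :])   # lower band
    return bands

regions = effective_subregions(np.zeros((128, 256)))
print(len(regions), regions[0].shape)  # 6 (55, 256)
```

Each extracted subregion is then matched with BLPOC against the corresponding subregion of the enrolled image, and the best-scoring pair decides the comparison.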

CONCLUSION

In the first stage of iris recognition, edge detection algorithms are used to find the edges in the CASIA database, the IIT database and the slit-lamp images. In the next stage, preprocessing and matching are the main modules, and hierarchical phase-based image matching is performed in two different ways: the multiple sub-region method and the block partition method. In the preprocessing module, we determine the iris region in the original image and then use edge detection and the Hough transform to compute the localization parameters exactly. Future work will target avoiding data loss during feature extraction and improving the efficiency of the system in identifying fake contact lenses. The system can be made efficient in every aspect to improve its performance.