Biometric Recognition And Security Computer Science Essay

In today's world, privacy is hard to protect. A person's identity is not safe: it can be stolen and misused by criminals to carry out illegal activities. To guard against this, we need a form of identification that is safe and secure. Many other biometrics exist, but each has its own shortcomings. The iris pattern of the eye is the best of these because it stays the same throughout a person's lifetime and does not change from one moment to the next. This technology offers advantages over fingerprint, face, and other recognition technologies and gives people a high level of confidence. Researchers have even argued that it outperforms DNA testing. In effect, the iris is a password that we never need to remember or carry, because it is with us at all times.

Recognition of human beings has been the central problem. Most pattern recognition tasks struggle with intra-class variability, but the iris largely avoids this problem: it cannot be changed, nor can it be stolen from someone, and the iris patterns of two different people are never the same. In face recognition, by contrast, difficulties arise from the fact that the face is a changeable social organ displaying a variety of expressions, as well as being an active 3D object whose image varies with viewing angle, pose, illumination, accoutrements, and age. It has been shown that for facial images taken at least one year apart, even the best current algorithms have error rates of 43% (Phillips et al. 2000) to 50% (Pentland et al. 2000). Against this intra-class (same-face) variability, inter-class variability is limited because different faces possess the same basic set of features in the same canonical geometry.

Imaged monochromatically at a distance of about 35 cm, the outline overlay in the figure shows the results of the iris and pupil localization and eyelid detection steps, and the bit stream in the top left is the result of demodulation with complex-valued 2D Gabor wavelets to encode the phase sequence of the iris pattern. For all of these reasons, iris patterns are an attractive alternative approach to visual recognition, particularly when large databases must be searched for a match. Although the iris is small (about 11 mm) and sometimes difficult to image, the mathematical advantage of its pattern is enormous.

Figure: Example of an IRIS Patterned Image

In addition, the iris is an organ that is protected, both internally and externally, from the environment. Pattern recognition still faces the problems of localizing eyes within faces and localizing the iris within the eye, but once localized, the iris admits a size-invariant representation.

Iris Recognition Structure:

Image Acquisition

Image Localization

Pattern Matching

Image Acquisition:

One of the major challenges of automated iris recognition is to capture a high-quality image of the iris while remaining noninvasive to the human operator. Given that the iris is a relatively small (typically about 1 cm in diameter), dark object and that human operators are very sensitive about their eyes, this matter requires careful engineering. Several points are of particular concern. First, it is desirable to acquire images of the iris with sufficient resolution and sharpness to support recognition. Second, it is important to have good contrast in the interior iris pattern without resorting to a level of illumination that annoys the operator, i.e., adequate intensity of source (W/cm²) constrained by operator comfort with brightness (W/sr-cm²). Third, these images must be well framed (i.e., centered) without unduly constraining the operator (i.e., preferably without requiring the operator to employ an eye piece, chin rest, or other contact positioning that would be invasive). Further, as an integral part of this process, artifacts in the acquired images (e.g., due to specular reflections, optical aberrations, etc.) should be eliminated as much as possible.

To capture the fine detail of iris patterns, the acquired image should resolve a minimum of about 70 pixels in the iris radius. Monochrome CCD cameras are used for imaging, with illumination that is effectively invisible to the subject. Many different methods have been proposed for locating facial features such as the eyes, but they are not discussed in this paper. Image focus is assessed in real time by measuring the high-frequency power in the 2D Fourier spectrum of each frame, and audio feedback is used to guide the subject to the appropriate imaging distance.
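
As an illustration of that focus assessment, the following is a minimal MATLAB sketch (not the systems' actual code) that measures the fraction of spectral power above an assumed radial frequency cutoff in a single frame; the cutoff parameter is an illustrative assumption.

function score = focusScore(frame, cutoff)
% FOCUSSCORE  Fraction of spectral power above a radial frequency cutoff.
%   A well-focused frame gives a relatively high score; audio feedback can
%   then guide the subject toward the distance that maximizes it.
frame = double(frame);
F = abs(fftshift(fft2(frame))).^2;             % centred 2D power spectrum
[h, w] = size(F);
[u, v] = meshgrid((1:w)-(w+1)/2, (1:h)-(h+1)/2);
r = sqrt(u.^2 + v.^2);                         % radial spatial frequency
highBand = r > cutoff*min(h, w)/2;             % keep only the high-frequency band
score = sum(F(highBand)) / sum(F(:));
end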

Image Localization:

Without placing undue constraints on the human operator, image acquisition of the iris cannot be expected to yield an image containing only the iris. Rather, image acquisition will capture the iris as part of a larger image that also contains data derived from the immediately surrounding eye region. Therefore, prior to performing iris pattern matching, it is important to localize that portion of the acquired image that corresponds to an iris. In particular, it is necessary to localize that portion of the image derived from inside the limbus (the border between the sclera and the iris) and outside the pupil. Further, if the eyelids are occluding part of the iris, then only that portion of the image below the upper eyelid and above the lower eyelid should be included. Typically, the limbic boundary is imaged with high contrast, owing to the sharp change in eye pigmentation that it marks. The upper and lower portions of this boundary, however, can be occluded by the eyelids. The pupillary boundary can be far less well defined. The image contrast between a heavily pigmented iris and its pupil can be quite small. Further, while the pupil typically is darker than the iris, the reverse relationship can hold in cases of cataract: the clouded lens leads to a significant amount of backscattered light. Like the pupillary boundary, eyelid contrast can be quite variable depending on the relative pigmentation in the skin and the iris. The eyelid boundary also can be irregular due to the presence of eyelashes. Taken in tandem, these observations suggest that iris localization must be sensitive to a wide range of edge contrasts, robust to irregular borders, and capable of dealing with variable occlusion.

In the Daugman system, this localization is performed by an operator that acts, in effect, as a circular edge detector: blurred at successively finer scales, it searches for the maximum in the derivative, with respect to increasing radius, of the image intensity integrated around candidate circular contours. The same operator finds both the pupillary boundary and the outer (limbic) boundary of the iris, and the initial search for the limbus incorporates additional evidence to improve its robustness.
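
The following is a minimal MATLAB sketch in the spirit of that circular edge search, not Daugman's actual implementation: for each candidate centre and radius it averages the image intensity around a circle and looks for the radius at which the smoothed radial derivative of that average is largest. The candidate centres, radius range and smoothing width are assumed parameters.

function [bestC, bestR] = circularEdgeSearch(eyeImg, centres, radii)
% CIRCULAREDGESEARCH  Crude circular boundary finder for iris localization.
%   centres is an N-by-2 list of candidate [cx cy] centres (in pixels) and
%   radii is a vector of candidate radii.
eyeImg = double(eyeImg);
theta = linspace(0, 2*pi, 128);                  % samples along each circle
g = exp(-((-2:2).^2)/2);  g = g/sum(g);          % small Gaussian for radial blurring
bestScore = -inf;  bestC = [0 0];  bestR = 0;
for n = 1:size(centres, 1)
    cx = centres(n,1);  cy = centres(n,2);
    ringMean = zeros(size(radii));
    for k = 1:numel(radii)
        xs = round(cx + radii(k)*cos(theta));
        ys = round(cy + radii(k)*sin(theta));
        ok = xs >= 1 & xs <= size(eyeImg,2) & ys >= 1 & ys <= size(eyeImg,1);
        ringMean(k) = mean(eyeImg(sub2ind(size(eyeImg), ys(ok), xs(ok))));
    end
    d = conv(abs(diff(ringMean)), g, 'same');    % blurred derivative w.r.t. radius
    [score, k] = max(d);
    if score > bestScore
        bestScore = score;  bestC = [cx cy];  bestR = radii(k);
    end
end
end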

The phase demodulation process is used to encode iris patterns. Local regions of an iris are projected onto quadrature 2D Gabor wavelets, generating complex-valued coefficients whose real and imaginary parts specify the coordinates of a phasor in the complex plane. The angle of each phasor is quantized to one of the four quadrants, setting two bits of phase information. This process is repeated all across the iris with many wavelet sizes, frequencies, and orientations, to extract 2,048 bits.
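
A minimal MATLAB sketch of this two-bit phase quantization for a single local region is given below; the wavelength, orientation and envelope width are illustrative assumptions, and a full encoder would repeat this over many regions, scales and orientations.

function bits = gaborPhaseBits(patch, wavelength, orientDeg, sigma)
% GABORPHASEBITS  Two phase bits from one iris region via a 2D Gabor wavelet.
patch = double(patch) - mean(patch(:));        % remove DC so the phase is meaningful
[h, w] = size(patch);
[x, y] = meshgrid((1:w)-(w+1)/2, (1:h)-(h+1)/2);
th = orientDeg*pi/180;
xr = x*cos(th) + y*sin(th);                    % coordinate along the wavelet's orientation
gabor = exp(-(x.^2 + y.^2)/(2*sigma^2)) .* exp(1i*2*pi*xr/wavelength);
c = sum(sum(patch .* conj(gabor)));            % complex-valued projection coefficient
bits = [real(c) >= 0, imag(c) >= 0];           % quadrant of the phasor -> two bits
end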

Pattern Matching:

Having localized the region of an acquired image that corresponds to the iris, the final task is to decide if this pattern matches a previously stored iris pattern. This matter of pattern matching can be decomposed into four parts:

1) bringing the newly acquired iris pattern into spatial alignment with a candidate data base entry;

2) choosing a representation of the aligned iris patterns that makes their distinctive patterns apparent;

3) evaluating the goodness of match between the newly acquired and data base representations;

4) deciding if the newly acquired data and the data base entry were derived from the same iris, based on the goodness of match.

1) Alignment: To make a detailed comparison between two images, it is advantageous to establish a precise correspondence between characteristic structures across the pair. Both of the systems under discussion compensate for image shift, scaling, and rotation. Given the systems' ability to aid operators in accurate self-positioning, these have proven to be the key degrees of freedom that require compensation. Shift accounts for offsets of the eye in the plane parallel to the camera's sensor array. Scale accounts for offsets along the camera's optical axis. Rotation accounts for deviation in angular position about the optical axis. Nominally, pupil dilation is not a critical issue for the current systems, since their constant controlled illumination should bring the pupil of an individual to the same size across trials (barring illness, etc.). For both systems, iris localization is charged with isolating an iris in a larger acquired image.
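
One common way to factor out shift and scale, sketched below in MATLAB, is to unwrap the annular iris region into a fixed-size pseudo-polar rectangle; this is an illustrative normalization step under assumed inputs from the localization stage, not either system's exact implementation. Rotation about the optical axis then corresponds to a cyclic shift of the columns, which can be searched over during matching.

function polarIris = unwrapIris(eyeImg, cx, cy, rPupil, rIris, nRad, nAng)
% UNWRAPIRIS  Sample the annulus between pupil and limbus onto an
%   nRad-by-nAng pseudo-polar grid.  (cx, cy), rPupil and rIris are assumed
%   to come from the iris localization step.
eyeImg = double(eyeImg);
polarIris = zeros(nRad, nAng);
for a = 1:nAng
    theta = 2*pi*(a-1)/nAng;
    for r = 1:nRad
        rho = rPupil + (rIris - rPupil)*(r-1)/(nRad-1);   % radial interpolation
        x = min(max(round(cx + rho*cos(theta)), 1), size(eyeImg,2));
        y = min(max(round(cy + rho*sin(theta)), 1), size(eyeImg,1));
        polarIris(r, a) = eyeImg(y, x);
    end
end
end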

2) Representation: The distinctive spatial characteristics of the human iris are manifest at a variety of scales. For example, distinguishing structures range from the overall shape of the iris to the distribution of tiny crypts and detailed texture. To capture this range of spatial detail, it is advantageous to make use of a multiscale representation. Both of the iris-recognition systems under discussion make use of bandpass image decompositions to avail themselves of multiscale information. The Daugman system makes use of a decomposition derived from application of a two dimensional version of Gabor filters to the image data.

3) Goodness of Match: Given the systems' controlled image acquisitions and abilities to bring data base entry and newly acquired data into precise alignment, an appropriate match metric can be based on direct point-wise comparisons between primitives in the corresponding representations. The Daugman system quantifies this matter by computing the percentage of mismatched bits between a pair of iris representations, i.e., the normalized Hamming distance.
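
A minimal MATLAB sketch of this match metric is shown below; the two codes are assumed to be equal-length bit vectors, and the occlusion masks that a real system would also apply are omitted.

function hd = irisHammingDistance(codeA, codeB)
% IRISHAMMINGDISTANCE  Fraction of disagreeing bits between two iris codes.
%   A value near 0 indicates the same iris; unrelated irises cluster near 0.5.
codeA = logical(codeA(:));
codeB = logical(codeB(:));
hd = sum(xor(codeA, codeB)) / numel(codeA);
end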

4) Decision: The final task that must be performed for current purposes is to evaluate the goodness-of-match values into a final judgment as to whether the acquired data does (authentic) or does not (imposter) come from the same iris as does the data base entry. For the Daugman system, this amounts to choosing a separation point in the space of (normalized) Hamming distances between iris representations. Distances smaller than the separation point will be taken as indicative of authentics; those larger will be taken as indicative of imposters. An appeal to statistical decision theory is made to provide a principled approach to selecting the separation point. There, given distributions for the two events to be distinguished (i.e., authentic versus imposter), the optimal decision strategy is defined by taking the separation as the point at which the two distributions cross over. This decision strategy is optimal in the sense that it leads to equal probability of false accept and false reject errors. (Of course, even with a theoretically "optimal" decision point in hand, one is free to choose either a more conservative or more liberal criterion according to the needs of a given installation.) In order to calculate the cross-over point, sample populations of imposters and authentics were each fit with parametrically defined distributions. This was necessary since no data, i.e., Hamming distances, were observed in the cross-over region.
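
Purely as an illustration of locating such a crossover, the MATLAB fragment below uses no real data; the means and standard deviations of the two Gaussian-shaped distributions are made-up values chosen only to show the calculation.

% Hypothetical authentic and imposter Hamming-distance distributions.
muA = 0.08;  sdA = 0.04;                           % assumed authentic parameters
muI = 0.46;  sdI = 0.03;                           % assumed imposter parameters
hd  = 0:0.001:0.5;
pA  = exp(-(hd-muA).^2/(2*sdA^2)) / (sdA*sqrt(2*pi));
pI  = exp(-(hd-muI).^2/(2*sdI^2)) / (sdI*sqrt(2*pi));
inBetween = hd > muA & hd < muI;                   % restrict the search to between the modes
[~, k] = min(abs(pA - pI) + ~inBetween*1e9);
fprintf('Separation point (crossover) at HD = %.3f\n', hd(k));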

System and Performance:

In contrast to earlier systems that used ID cards and passwords, systems that use iris recognition reference each individual's unique iris code, so it is possible to maintain an extremely high level of security. Since it is also possible to check one's identity without making contact, there are none of the unpleasant feelings associated with giving fingerprints. Figure 4 illustrates the system configuration. The system comprises a gate device that references data and a management device that registers data. The management device encodes the iris image obtained by the connected iris register, registers it in the database along with personal information (name, affiliation, etc.) and then manages the information. The gate device encodes the iris image obtained from the iris reference device and then checks it against the iris code registered in the database.

Users of this system register their iris in the management device in advance. Including the time required for the device operator to perform the registration task, the entire procedure takes about 3 minutes. The user stands in front of the iris reference device and looks at the LCD screen on it. Iris referencing is performed while the user watches the image of the user's own eye on the LCD screen; because the user can see this image during referencing, it is possible to obtain the best iris pattern. The iris pattern is captured by the gate device and the user's identity is confirmed. With a reference time of less than 1 second, highly precise identification is possible, with a mis-identification rate of less than 0.001%.

The image-acquisition, iris-localization, and pattern matching components developed by Daugman and by Wildes et al. have been assembled into prototype iris-recognition systems. Both of these systems have been awarded U.S. patents. Further, both systems have been the subject of preliminary empirical evaluation. In this section, the system and performance aspects of the two approaches are described. The Daugman iris-recognition system consists of an image-acquisition rig (standard video camera, lens, frame grabber, LED illuminator and miniature video display for operator positioning) interfaced to a standard computer workstation. The image-analysis software for the system has been implemented in optimized integer code. The system is capable of three functional modes of operation: enrollment, verification, and identification. In enrollment mode, an image of an operator is captured and a corresponding data base entry is created and stored. In verification mode, an image of an operator is acquired and is evaluated relative to a specified data base entry. In identification mode, an image is acquired and evaluated relative to the entire data base via sequential comparisons. Both the enrollment and verification modes take under 1 s to complete. The identification mode can evaluate against a data base of up to 4000 entries in the same amount of time.

A commercial version of this system is also available through IriScan. This version embodies largely the same approach, albeit with further optimization and special-purpose hardware for a more compact instantiation.

Security Considerations:

Methods that have been suggested to provide some defense against the use of fake eyes and irises include:

Changing ambient lighting during the identification (switching on a bright lamp), such that the pupillary reflex can be verified and the iris image be recorded at several different pupil diameters

Analyzing the 2D spatial frequency spectrum of the iris image for the peaks caused by the printer dither patterns found on commercially available fake-iris contact lenses

Analyzing the temporal frequency spectrum of the image for the peaks caused by computer displays

Using spectral analysis instead of merely monochromatic cameras to distinguish iris tissue from other material

Observing the characteristic natural movement of an eyeball (measuring nystagmus, tracking eye while text is read, etc.)

Testing for coaxial retinal back-reflection ("red-eye" effect)

Testing for reflections from the eye's four optical surfaces (front and back of both cornea and lens) to verify their presence, position and shape

Using 3D imaging (e.g., stereo cameras) to verify the position and shape of the iris relative to other eye features

Digital Watermarking:

What is Watermarking?

Watermarking is the art of hiding text or a binary pattern, called a watermark, in a given digital image. Hiding data in this way is also known as steganography.

Watermarking using image processing techniques?

The core idea behind this work is to process a given digital image, extract certain salient features such as contours, skeletons and corners, and embed watermark bits there so that the watermarked image does not appear different to the naked eye.

Digital watermarking is a relatively new way of protecting intellectual property and the ownership rights of digital data, be it an image, an article or music. A digital watermark is essentially a digital pattern inserted into the data, and it is treated as a digital signature.

In this era of the World Wide Web, when electronic commerce and information interchange have created complex cyber traffic, the issue of copyright law and plagiarism has become a major concern among those who create and publish work on the web. Moreover, the communication of sensitive information between government offices via satellite or telephone lines also faces the danger of interception by unauthorized persons, which may threaten the security of the country as a whole.

So, security measures such as watermarking of documents and digital data, digital certification, and steganographic techniques have become an indispensable component of the field of communication. This thesis addresses one such problem and provides a basic, feasible solution to it.

Digital Watermarking:

The purpose of digital watermarks is to provide copyright protection for intellectual property that is in digital format.

As seen above, Alice creates an original image and watermarks it before passing it to Bob. If Bob claims the image and sells copies to other people Alice can extract her watermark from the image proving her copyright to it. The caveat here is that Alice will only be able to prove her copyright of the image if Bob hasn't managed to modify the image such that the watermark is damaged enough to be undetectable or added his own watermark such that it is impossible to discover which watermark was embedded first.

Matlab Code:

ALTERIM: (Image Altering)

function alt = alterim(i1, i2)
% ALTERIM  Interleave two text images pixel by pixel.
%   Pixels at locations where (x+y) is even are taken from i1,
%   the others from i2.  i1 and i2 are assumed to be the same size.
alt = zeros(size(i1), 'like', i1);         % preallocate with the same class as i1
for x = 1:size(i1,1)
    for y = 1:size(i1,2)
        if mod(x+y, 2) == 0
            alt(x,y,:) = i1(x,y,:);        % even location: pixel from image 1
        else
            alt(x,y,:) = i2(x,y,:);        % odd location: pixel from image 2
        end
    end
end
end

EMBEDBP: (Embedding text in the image)

function [intl] = embedbp(I, txt, b)
% EMBEDBP  Embed the string TXT in the b-th bit plane of the image I.
%   If b is not specified, the least significant bit plane (b = 1) is used.
if nargin == 2
    b = 1;
end
I = double(I);                       % work in double precision throughout
N = 8*numel(txt);                    % number of bits to embed
S = numel(I);                        % number of pixels available
if N > S
    warning('Text truncated to be within size of image');
    txt = txt(1:floor(S/8));
    N = 8*numel(txt);
end
p = 2^b;                             % modulus that isolates the b-th bit
h = 2^(b-1);                         % value of the b-th bit
I1  = reshape(I, 1, S);
dim = size(I);
I2  = round(abs(I1(1:N)));           % magnitudes of the pixels that carry bits
si  = sign(I1(1:N));
si(si == 0) = 1;                     % treat zero-valued pixels as positive
% Clear the b-th bit of each carrier pixel.
for k = 1:N
    if mod(I2(k), p) >= h
        I2(k) = I2(k) - h;
    end
end
bt   = dec2bin(txt, 8);              % 8-bit binary code of each character
bint = reshape(bt, 1, N);            % flatten (column-major) into one bit string
bi   = h*bint - h*48;                % char '0'/'1' -> numeric 0/h
I3   = double(I2) + bi;              % set the b-th bit where the text bit is 1
I4   = double(si).*I3;               % restore the original signs
I5   = [I4, I1(N+1:S)];              % append the untouched remainder of the image
intl = reshape(I5, dim);
end

IMHIDE: (Hiding the text in the image)

function hide = imhide(org, text)
% IMHIDE  Hide a binary text image in the least significant bit (LSB) of
%   the carrier image ORG.  Pixels of TEXT at or below 128 clear the LSB,
%   brighter pixels set it.
hide = zeros(size(org,1), size(org,2), 'like', org);   % preserve the carrier class
for i = 1:size(org,1)
    for j = 1:size(org,2)
        if text(i,j) <= 128
            hide(i,j) = bitset(org(i,j), 1, 0);   % dark text pixel  -> LSB = 0
        else
            hide(i,j) = bitset(org(i,j), 1);      % bright text pixel -> LSB = 1
        end
    end
end
end

IMXTRACT: (Extracting the text images from the decoded matrix)

function [tx1, tx2] = imxtract(diim)
% IMXTRACT  Recover the two interleaved text images from the decoded text
%   matrix DIIM.  Even (x+y) locations belong to text image 1, odd locations
%   to text image 2; missing pixels are filled from the neighbouring row.
tx1 = zeros(size(diim,1), size(diim,2));
tx2 = zeros(size(diim,1), size(diim,2));
for x = 1:size(diim,1)
    for y = 1:size(diim,2)
        if mod(x+y, 2) == 0
            tx1(x,y,:) = diim(x,y,:);                    % pixel belonging to image 1
        elseif x ~= size(diim,1) && y ~= size(diim,2)    % bounds check on both dimensions
            tx1(x,y,:) = diim(x+1,y,:);                  % fill from the next row
        end
        if x ~= size(diim,1) && y ~= size(diim,2) && mod(x+y,2) == 0
            tx2(x,y,:) = diim(x+1,y,:);                  % image 2 filled from the next row
        else
            tx2(x,y,:) = diim(x,y,:);                    % pixel belonging to image 2
        end
    end
end
end

Alter Image:

Alt=alterim(i1,i2);

This function selects pixels alternately from text images 1 and 2 and stores them in the 'alt' text image.

Where:

i1 = Text image 1 to be encoded

i2 = Text image 2 to be encoded

alt = Resulting encoded text image, which contains the pixels of i1 and i2 alternately.

Figure: Selection of pixels from text images i1 and i2 (pixels taken alternately from i1 and from i2).

IM Hide:

This function sets the LSB of the colour image to '1' or '0' based on the information available in the text image.

Hide = imhide(org, text);

Where:

Org - Original colour image used to hide the text data.

Text - Text image which carries the information of the two text images in its alternating pixels.

Hide - Encoded image which carries the text data in its LSBs.

To hide the image: check whether the pixel's gray value is <= 128. If it is, set the LSB (1st bit) to '0'; otherwise set the LSB (1st bit) to '1'.

Embedbp:

[intl] = embedbp(I, txt, b);

Embedbp embeds the data in the b-th bit plane of the image.

[INTL] = EMBEDBP(I, TXT, B) embeds the string TXT in the B-th bit plane of the image I and returns the watermarked image INTL. If B is not specified, it is taken as 1.
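
A hypothetical usage sketch is shown below; the file names and the owner string are illustrative assumptions, not part of the original work.

I  = double(imread('iris_gray.png'));        % assumed grayscale carrier image
wm = embedbp(I, 'Owner: Alice', 1);          % embed the string in bit plane 1 (the LSB)
imwrite(uint8(wm), 'iris_watermarked.png');  % save the watermarked image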

Im Extract:

Txt = txtxtract(IM);

This function extracts the text data from the encoded colour image's LSBs.

IM - Encoded colour image which carries the text image information.

Txt - Matrix which carries the information about the text data.

Check the LSB (bit 1) of the colour image: if it is set, set the gray value to 255 (white); otherwise set the gray value to 0 (black).

IM Extract:

This function extracts the two text images from the diim matrix created above.

Where:

Diim - Text matrix whose values are 0 (black) or 255 (white).

Txt1 - Extracted text image 1 from diim (even locations of x+y).

Txt2 - Extracted text image 2 from diim (odd locations of x+y).

To extract text image 1: select pixels alternately, taking each pixel from the 'even' locations of (x+y); for the remaining locations, check whether the matrix index exceeds the image size and, if not, take the pixel from the next ((x+y+1)th) location. Text image 2 is extracted in the same way from the odd locations of (x+y).

Proposed Idea:

With the help of watermarking, we can embed six different types of images into the colour iris pattern image and thereby secure the customer's details and reports. In this way the data is hidden, protected, and kept secure. When the image is decoded during the verification process, the data hidden in the colour iris pattern image is also decoded, and the information can be viewed.
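
As a purely hypothetical end-to-end sketch of this idea, the MATLAB fragment below combines the functions defined earlier, shown with just two text images for brevity; the file names, the assumption of 8-bit grayscale text images, and the choice of hiding the data in a single colour channel are all illustrative assumptions.

iris = imread('iris_color.png');              % assumed colour iris pattern image
txt1 = imread('customer_details.png');        % assumed 8-bit grayscale text image 1
txt2 = imread('customer_report.png');         % assumed 8-bit grayscale text image 2

combined = alterim(txt1, txt2);               % interleave the two text images
marked = iris;
marked(:,:,1) = imhide(iris(:,:,1), combined);   % hide them in the red channel's LSBs

% During verification, recover the hidden text matrix from the LSBs and
% split it back into the two text images:
recovered = double(bitget(marked(:,:,1), 1)) * 255;   % LSB -> 0/255 text matrix
[rec1, rec2] = imxtract(recovered);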