Overview Of Finger Vein Recognition Computer Science Essay


Finger vein recognition is a biometric method that identifies a person based on a personal characteristic. Biometrics is well developed nowadays and is mainly applied to security. Biometric methods are still being improved continuously to increase their accuracy. Applying a biometric requires a few conditions to be fulfilled:

Uniqueness: Indicates that two different people cannot have the same characteristic.

Permanence: Characteristic cannot change according to the time.

Collectability: The characteristic can be measured quantitatively.

Performance: An acceptable result must be produced by the identification process.

Circumvention: Refers to how easily the characteristic can be imitated or forged. [1]

Finger vein recognition uses infrared light that passes through the skin of the finger and is absorbed by the haemoglobin in the veins. Because the haemoglobin absorbs the infrared light, the finger veins become visible to a CCD (charge-coupled device) camera.

There are a few main parts to the vein recognition process:

Vein image acquisition

Pre-processing

Feature extraction

Matching

Vein image acquisition

This first process gathers information or samples (images) and saves the data for each person. For this project, the finger vein images are provided.


Pre-processing

The given image is cropped, then enhancement and filtering are performed. The veins are enhanced until they are clearly visible; filtering reduces noise and removes unwanted objects.

Feature extraction

During this process, dilation is applied with a structuring element. Small objects are removed from the image, and skeletonization is applied in preparation for the matching process.


Matching

The last stage compares the data obtained from processing with the images in the database.

1.2 Technical Objectives

To design and develop a finger vein recognition algorithm in MATLAB.

To research a new method of extracting finger veins, based on the literature review.

To enhance the vein information using filtering and thresholding techniques.

To produce the segmented object image using morphological operations.

To develop a matching algorithm in order to evaluate the finger vein extraction method.


2.1 Finger Vein Recognition

Finger vein recognition has been under development for decades. It has been implemented on several platforms, such as MATLAB and FPGA-based systems. Several methods are commonly applied in finger vein recognition, such as the median filter, the Gabor filter, DT-CNN (Discrete-Time Cellular Neural Network) and the GA (Genetic Algorithm). The extraction and enhancement processes are broadly similar; what differs is the combination of methods applied. The matching results from different methods may also vary.

2.2 Image Enhancement

For image enhancement, every image must be converted to greyscale for further processing. Greyscale is a measurement of intensity, ranging from 0 to 255: 0 represents the darkest pixel and 255 the brightest. Enhancement can be done by reading the greyscale histogram of the image and adjusting the intensity as required. A binary image is one possible outcome of greyscale processing: it has only two levels, 0 (black) and 1 (white).
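
As an illustration, the conversion from colour to greyscale and then to binary can be sketched as below. The sketch is written in Python with NumPy rather than MATLAB, and the luminance weights and threshold value are standard choices, not values taken from the project itself:

```python
import numpy as np

def to_greyscale(rgb):
    """Convert an RGB image (H x W x 3, values 0-255) to greyscale
    using the standard luminance weights."""
    grey = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return grey.round().astype(np.uint8)

def to_binary(grey, level=128):
    """Binarise a greyscale image: 1 where intensity >= level, else 0."""
    return (grey >= level).astype(np.uint8)

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = [255, 255, 255]          # one white pixel, rest black
grey = to_greyscale(rgb)
binary = to_binary(grey)
```

The white pixel maps to greyscale 255 and binary 1, while the black pixels map to 0 in both representations.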

2.2.1 Median Filter

The median filter is mainly used in image and signal processing for noise reduction. It is typically applied during pre-processing to enhance and improve the image quality for further processing. The filter uses a window covering an area of the image containing several pixels; the median of the values in that area is calculated, and the centre pixel is replaced with the median value. Compared with other smoothing filters, the median filter preserves sharp edges better, because it reuses an actual pixel value from the neighbourhood.

The left image above shows the original window, a 3 x 3 neighbourhood. The median value of the nine pixels, 143, is calculated, and the centre pixel is replaced with this median value, as shown on the right.

Figure 2.1: Image before median filter Figure 2.2: Image after median filter [2]
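
The windowed median operation described above can be sketched as follows (a Python/NumPy illustration rather than the project's MATLAB code; edge pixels are handled here by edge replication, which is one of several possible border strategies):

```python
import numpy as np

def median_filter(img, size=3):
    """Apply a median filter with a size x size window.
    Border pixels are handled by replicating the edge values."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size]
            out[i, j] = np.median(window)   # centre pixel gets the median
    return out

# A single bright noise pixel in a dark region is removed entirely,
# because the median of its neighbourhood is 0.
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255
filtered = median_filter(img)
```

This demonstrates why the median filter suppresses impulse ("salt-and-pepper") noise while leaving uniform regions untouched.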

2.2.2 Low-Pass Filter

A low-pass filter is a kind of smoothing filter which removes the high spatial frequency components of an image. A common choice is the 2D Gabor filter. The definition of the 2D Gabor filter is

G(x, y) = exp(-(x'^2 + y'^2) / (2*sigma^2)) * exp(i*2*pi*mu*x'), where x' = x cos(theta) + y sin(theta) and y' = -x sin(theta) + y cos(theta)

x and y are the coordinates of the image I(x, y) used, theta is the direction of the filter, mu is the frequency of the complex sine function, and sigma is the standard deviation of the Gaussian envelope along the x and y axes. [3]

Below is an example of the effect of the Gabor filter, applied with mean = 0 and a sigma of 8. The filtered image has reduced noise and appears smoother.

Figure 2.3: Image before Gabor filter Figure 2.4: Image after Gabor filter [2]
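
The Gabor definition above can be sketched as below (a Python/NumPy illustration; the kernel size and parameter values are arbitrary choices for the example, not the project's settings):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, mu):
    """Build a complex 2D Gabor kernel: a Gaussian envelope of standard
    deviation sigma, multiplied by a complex sinusoid of frequency mu
    oriented along direction theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * mu * xr)
    return envelope * carrier

# A 21 x 21 kernel with sigma = 8, as in the example above.
k = gabor_kernel(size=21, sigma=8, theta=0, mu=0.1)
```

Convolving an image with such a kernel attenuates high spatial frequencies, which is the smoothing effect illustrated in the figures above.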

2.2.3 Image Normalization

Normalization is a process that changes the range of pixel intensity values. Image normalization is very useful for reducing the effects of noise, occlusion or uneven illumination. During the pre-processing stage, image normalization is applied when there is interference in the image, for example in object recognition tasks.
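
A simple form of this is min-max normalization, which linearly rescales the intensities into a chosen range. The sketch below is a Python/NumPy illustration (the target range 0-255 is an assumption for the example):

```python
import numpy as np

def normalize(img, new_min=0, new_max=255):
    """Linearly rescale pixel intensities to [new_min, new_max]
    (min-max normalization)."""
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.full_like(img, new_min, dtype=np.float64)
    scaled = (img - lo) / (hi - lo)   # scale to [0, 1]
    return new_min + scaled * (new_max - new_min)

img = np.array([[50, 100], [150, 200]], dtype=np.float64)
out = normalize(img)                  # 50 -> 0, 200 -> 255
```

Stretching the intensity range in this way makes low-contrast vein structures easier to separate from the background in the later thresholding step.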

2.3 Image Segmentation

Segmentation in image processing is the process of partitioning an image into multiple segments. Its purpose is to simplify or change the representation of an image into something that is easier to analyse. Image segmentation is usually used to locate objects and boundaries in images. It also assigns a label to every pixel, such that pixels with the same label share certain visual characteristics. [4]

2.3.1 Global Thresholding

Thresholding is one of the simplest methods of image segmentation. [5] Global thresholding separates the important data, the main object, from the background. It collects the pixel intensity information for the whole image; values within a certain range are set to one and the rest are set to zero. The output of thresholding is a binary image.

One well-known thresholding method is Otsu's method, which chooses the threshold that minimizes the within-class variance (equivalently, maximizes the between-class variance). It is available in the MATLAB toolbox as the function "graythresh". The syntax is shown below:

level = graythresh(I)

level is the normalized intensity value, in the range 0.0 to 1.0. im2bw is then used to convert the image to binary using this threshold.
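
To make the method concrete, here is a Python/NumPy sketch of Otsu's threshold selection, analogous to graythresh (an exhaustive search over all 256 levels; MATLAB's internal implementation may differ):

```python
import numpy as np

def otsu_level(img):
    """Return a normalized threshold in [0, 1], analogous to MATLAB's
    graythresh: pick the level that maximizes between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t / 255.0

# Two well-separated intensity clusters: the threshold falls between them.
img = np.array([[20, 20, 20], [200, 200, 200]], dtype=np.uint8)
level = otsu_level(img)
binary = (img >= level * 255).astype(np.uint8)   # the role im2bw plays
```

The chosen level lands between the two intensity clusters, so the dark and bright pixels end up in separate binary classes.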

2.3.2 Local Adaptive Thresholding

Global thresholding may not be useful when the background of the image has uneven illumination. A common way to overcome this is to compensate for the illumination during pre-processing and then apply a global threshold to the pre-processed image. An improvement is to apply a morphological top-hat operator and then use graythresh on the result. Thresholding f(x, y) in this way is equivalent to thresholding it with a locally varying threshold function T(x, y):

T(x, y) = fo(x, y) + To

where fo is the morphological opening of f, and the constant To is the result of the function graythresh applied to fo. [6]

An alternative way to find the local threshold is to statistically examine the intensity values in the local neighbourhood of each pixel. Simple and fast choices include the mean of the local intensity distribution,

T = mean

the median value,

T = median

or the mean of the maximum and minimum values,

T = (max + min)/2

The neighbourhood must be large enough to cover sufficient foreground and background pixels, otherwise a poor threshold is chosen. On the other hand, choosing regions that are too large can violate the assumption of approximately uniform illumination.

An alternative way to simulate the adaptive thresholding operation is with the following steps:

Convolve the image with a suitable statistical operator, i.e. the mean or the median.

Subtract the original from the convolved image.

Threshold the difference image with a constant C.

Invert the thresholded image.
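
The steps above can be sketched as follows (a Python/NumPy illustration using a mean filter; the window size and the constant C are arbitrary example values):

```python
import numpy as np

def adaptive_threshold(img, size=15, C=4):
    """Simulate adaptive thresholding:
    1. smooth the image with a size x size mean filter,
    2. subtract the original from the smoothed image,
    3. threshold the difference with a constant C,
    4. invert, so pixels much darker than their local mean come out 0."""
    pad = size // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    h, w = img.shape
    mean = np.empty((h, w))
    for i in range(h):                                   # step 1
        for j in range(w):
            mean[i, j] = padded[i:i + size, j:j + size].mean()
    diff = mean - img                                    # step 2
    thresholded = (diff > C).astype(np.uint8)            # step 3
    return 1 - thresholded                               # step 4

# A dark "vein" line on a bright background is isolated even though a
# single global threshold would also work here; the point is that the
# threshold adapts to each pixel's neighbourhood.
img = np.full((9, 9), 200, dtype=np.uint8)
img[:, 4] = 50
out = adaptive_threshold(img, size=5, C=10)
```

In the output, the dark line becomes 0 and the uniform background stays 1, because only pixels significantly darker than their local mean are selected.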

2.4 Morphological Image Processing

After image segmentation, the next stage is morphological transformation. This process operates on the image that has been converted to binary (logical) type. By selecting an appropriately shaped neighbourhood, the structuring element, regions of the image can be reconstructed into a certain shape.

2.4.1 Dilation and Erosion

Dilation and erosion are fundamental to morphological image processing. [7] They are essentially opposites of each other, and both functions apply only to binary images.

Dilation

Dilation is used to enlarge or expand the boundaries of foreground regions. As the boundaries expand, holes within the regions become smaller or even disappear. The key to applying dilation effectively is the structuring element, which is discussed below.

Erosion

Erosion works in the opposite way to dilation: it shrinks the boundaries of foreground regions. As the boundaries keep shrinking, holes become more visible and grow larger after eroding. Erosion also needs a structuring element to function properly.

Structuring element

Structuring elements come in various patterns and shapes and are mostly applied in dilation and erosion. Each pattern has specific coordinates: a number of discrete points relative to some origin. Cartesian coordinates are normally used, as it is convenient to represent the element as a small image on a rectangular grid. The origin does not always have to be at the centre of the structuring element; other positions can also be used. The figure below shows some examples of structuring elements. [8]

Figure 2.5: Some examples of structuring elements. [8]

Types of structure element used in MATLAB with syntax:

se = strel('diamond', R)

Creates a flat diamond-shaped structuring element, with R as the distance from the origin of the structuring element to the extreme point of the diamond.

se = strel('disk', R)

Creates a flat disk-shaped structuring element, with R as the radius measured from the origin of the structuring element.

se = strel('line', LEN, DEG)

Creates a flat linear structuring element, with LEN as the length and DEG as the angle (in degrees), measured in a counter-clockwise direction from the horizontal axis.

se = strel('octagon', R)

Creates a flat octagon-shaped structuring element, with R as the distance from the origin of the structuring element to the sides of the octagon, measured along both the horizontal and vertical axes. R must be a non-negative multiple of 3.

se = strel('pair', OFFSET)

Creates a flat structuring element containing two members. One member is at the origin; the location of the second is specified by the vector OFFSET, which must be a two-element vector of integers.

se = strel('periodicline', P, V)

Creates a flat structuring element containing 2*P + 1 members. V is a two-element vector containing the row and column offsets. One member is at the origin, while the others are located at 1*V, -1*V, 2*V, -2*V, ..., P*V and -P*V.

se = strel('rectangle', MN)

Creates a flat rectangular structuring element, with MN as a two-element vector of non-negative integers. The first element is the number of rows and the second the number of columns.

se = strel('square', W)

Creates a flat square structuring element, with W as the width in pixels. W must be a non-negative integer scalar.

se = strel('arbitrary', NHOOD); se = strel(NHOOD)

Creates an arbitrarily shaped structuring element. NHOOD is a matrix of 0s and 1s that specifies the shape. [9]
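
To show how a structuring element drives dilation and erosion (MATLAB's imdilate and imerode), here is a Python/NumPy sketch of both operations using a 3 x 3 square element, the equivalent of strel('square', 3):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: a pixel becomes 1 if any pixel under the
    structuring element is 1."""
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + sh, j:j + sw]
            out[i, j] = 1 if (window & se).any() else 0
    return out

def erode(img, se):
    """Binary erosion: a pixel stays 1 only if every pixel under the
    structuring element is 1."""
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + sh, j:j + sw]
            out[i, j] = 1 if (window[se == 1] == 1).all() else 0
    return out

se = np.ones((3, 3), dtype=np.uint8)     # like strel('square', 3)
img = np.zeros((7, 7), dtype=np.uint8)
img[3, 3] = 1
grown = dilate(img, se)                  # single pixel grows to a 3 x 3 block
shrunk = erode(grown, se)                # erosion shrinks it back to one pixel
```

The example makes the expand/shrink duality concrete: dilating a single pixel by the square element produces a 3 x 3 block, and eroding that block with the same element recovers the single centre pixel.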

2.4.2 Thinning

Thinning is a morphological operation used to remove selected foreground pixels from a binary image. It is normally used to tidy up the output of an edge detector by shrinking all lines to a single-pixel thickness. Thinning is related to the hit-and-miss transform. In simple terms, the thinning of an image I by a structuring element J is:

thin(I, J) = I - hit-and-miss(I, J)

where the subtraction is logical subtraction, defined by X - Y = X ∩ NOT Y. [10]
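
The formula above can be sketched in Python/NumPy as follows. The structuring element here uses the usual hit-and-miss convention (1 = must be foreground, 0 = must be background, -1 = don't care), and the specific element chosen is just one example that peels pixels off the top edge of a region:

```python
import numpy as np

def hit_and_miss(img, se):
    """Hit-and-miss transform: se entries are 1 (pixel must be foreground),
    0 (must be background) or -1 (don't care)."""
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + sh, j:j + sw]
            ok = ((window[se == 1] == 1).all()
                  and (window[se == 0] == 0).all())
            out[i, j] = 1 if ok else 0
    return out

def thin(img, se):
    """thin(I, J) = I AND NOT hit-and-miss(I, J) (logical subtraction)."""
    return img & (1 - hit_and_miss(img, se))

# This element matches foreground pixels with background directly above,
# so one thinning pass removes a pixel from the top of a thick region.
se = np.array([[0,  0,  0],
               [-1, 1, -1],
               [1,  1,  1]])
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 1                 # a solid 3 x 3 block
out = thin(img, se)
```

A full thinning algorithm applies a whole family of such elements (in all rotations) repeatedly until the image stops changing, leaving single-pixel-wide lines.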

2.4.3 Matching

Matching is the last step of the image processing chain. It matches the processed image against a template image. The matching process is based on correlation, which is quite simple in principle: an image f(x, y) is correlated with a template image w(x, y) to find all possible match locations, using a similarity measure (e.g. cross-correlation, SSD, SAD). The maximum output value gives the similarity ratio; results near 1 indicate a higher degree of similarity. The template image must be smaller than or equal in size to the image it is correlated with.
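
A normalised cross-correlation version of this can be sketched as below (a Python/NumPy illustration; the tiny image and template are invented for the example):

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and compute normalised
    cross-correlation at every position; return the score map and the
    best (row, col) offset. Scores near 1 indicate a close match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for i in range(ih - th + 1):
        for j in range(iw - tw + 1):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p**2).sum() * (t**2).sum())
            scores[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
    best = np.unravel_index(scores.argmax(), scores.shape)
    return scores, best

# A diagonal pattern hidden at offset (2, 2) is found exactly.
image = np.zeros((6, 6))
image[2, 2] = 1.0
image[3, 3] = 1.0
template = np.array([[1.0, 0.0], [0.0, 1.0]])
scores, best = match_template(image, template)
```

Because the scores are normalised, an exact match yields 1.0 regardless of overall brightness, which is why the similarity ratio can be compared against a fixed acceptance threshold.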


3.1 Block diagram overview


[Block diagram: image input -> ROI selection -> pre-processing -> feature extraction -> matching]


Figure 3.1: Block diagram overview

This project starts by reading the image to be processed. The ROI (Region of Interest) is selected to obtain the region that contains the finger veins. After the ROI has been cropped out, the image undergoes pre-processing. During this stage, the image is repositioned by comparison with a reference image. A median filter is applied to remove unwanted noise while maintaining image detail, and a Gaussian filter is applied to remove high spatial frequency components and smooth the image. Additional normalization is applied to eliminate noise, occlusion and illumination effects that cannot be filtered out by the previous two filters; for the recognition process, normalization is very useful when there is interference in the image. Local adaptive thresholding is then applied to separate the important data, the finger veins, from the background. After the pre-processing stage, morphological operations perform the feature extraction: dilation and erosion reconstruct the data so that the important information is maintained, small unwanted objects are filtered out, and thinning shrinks the vein lines to a single-pixel thickness. Lastly, the processed image is matched against the template database and the results are obtained.
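
The overall flow can be summarised in a toy end-to-end sketch (Python/NumPy). Every stage here is a deliberately simplified stand-in for the filters and morphology described in chapter 2, and the ROI coordinates, template and threshold rule are invented purely for the example:

```python
import numpy as np

def pipeline(image, template, roi):
    """Toy end-to-end sketch of the recognition flow: ROI selection,
    normalisation, thresholding, then matching against a template."""
    r0, r1, c0, c1 = roi
    img = image[r0:r1, c0:c1].astype(np.float64)              # ROI selection
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)  # normalisation
    binary = (img < img.mean()).astype(np.uint8)              # veins are dark
    score = (binary == template).mean()   # matching: fraction of agreement
    return binary, score

image = np.full((8, 8), 200.0)
image[2:6, 3] = 40                       # a dark vein inside the ROI
template = np.zeros((4, 4), dtype=np.uint8)
template[:, 1] = 1                       # expected vein position
binary, score = pipeline(image, template, roi=(2, 6, 2, 6))
```

Here the extracted vein pattern agrees with the template everywhere, so the similarity score is 1.0; in the real system each stage is replaced by the corresponding technique from the literature review.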

3.2 Project flowchart