Human Face Is One Of Most Common Computer Science Essay


The human face is one of the most common biometric patterns encountered daily by our visual system. Human beings can recognize faces, irrespective of facial expression, without any effort or delay; for machines, however, reliable face recognition is still a challenge. A key challenge is achieving optimal preprocessing, feature selection-extraction, and correct classification, particularly under conditions of input data variability.

The whole face recognition process is divided into two parts: feature selection-extraction and recognition. This project extracts the facial feature fields (eyebrows, eyes, nose and mouth) from 2-D gray-level face images. The underlying idea is that eigenfeatures, derived from the eigenvalues and eigenvectors of the gray-level data set constructed from the feature fields, can be used to locate these fields efficiently.

To obtain better results in human face recognition, most current approaches require some control over imaging conditions, taking into account the view or pose of the head relative to the camera, environmental clutter and illumination, deformation of facial components, and their spatial relations or changes in the pigmentation of the face. During the 1980s, work on face recognition was not very active. With growing interest in civilian and commercial research projects, real-time computation, and the increasing need for surveillance-related applications driven by trafficking and terrorist activity, face recognition research has taken on a new set of challenges. Over the past several years an extensive amount of work on automatic recognition of facial expression has been carried out by different researchers, covering various aspects of face recognition using parametric as well as non-parametric techniques. The Image Processing Toolbox provides a comprehensive set of reference-standard algorithms and graphical tools for image processing, analysis, visualization, and algorithm development.

EXISTING FACE RECOGNITION SYSTEM STRUCTURE:

Face recognition is a term that includes several sub-problems. Different classifications of these problems have been proposed in the literature. Some of them are explained in this section, and finally a general or unified classification is proposed.

Figure 1. System architecture of the face recognition problem.

Turk and Pentland introduced a PCA-based face recognition method. In this method, the input images are projected onto the face space using principal component analysis; the basis images of this space are called eigenfaces, and the projection coefficients are used as feature vectors for recognition. The recognition is performed using various distance classifiers with an appropriate threshold value. However, calculating the threshold value, which plays an important role in correct classification, is a difficult task.

PROPOSED FACE RECOGNITION ALGORITHM:

The proposed face recognition system consists of several modules: an image acquisition module, a pre-processing module, a feature extraction module and a classification module.

1. The acquisition module: This is the entry point of the face recognition process. In this module, the face image under consideration is presented to the system; in other words, the user is asked to supply a face image to the face recognition system. The acquisition module can obtain a face image from several different sources: it can be an image file located on a magnetic disk, it can be captured by a frame grabber and camera, or it can be scanned from paper with the help of a scanner.

2. The pre-processing module: In this module, early vision techniques are taken into consideration and the face images are normalized and if desired, they are enhanced to improve the recognition performance of the system. Some or all of the pre-processing steps may be implemented in a face recognition system.

3. The feature extraction module: After performing some pre-processing (if it is required), the normalized face image is presented to the feature extraction module in order to find the key features that are going to be used for classification. In short, this module is responsible for composing a feature vector that is well enough to represent the face image.

4. The classification module: In this module, with the help of a pattern classifier, the extracted features of the face image are compared with the ones stored in a face library (or face database). After this comparison, the face image is declared either known or unknown.

BACKGROUND OF THE PROPOSED METHOD:

Principal Component Analysis (PCA): The face recognition system uses the Principal Component Analysis (PCA) algorithm. An automatic face recognition system tries to find the identity of a given face image according to its stored knowledge; this memory of a face recognizer is generally simulated by a training set. In this project, the training set consists of the features extracted from known face images of different persons. Thus, the task of the face recognizer is to find the feature vector in the training set most similar to the feature vector of a given test image.

BLOCK DIAGRAM:

Figure: Block diagram of the proposed face recognition system.

OUTLINE OF THE PROJECT:

Start → Load training set images into the database → Calculate the mean of all images → Calculate the eigenvectors of the correlation matrix → Calculate the minimum Euclidean distance of the test image → Determine whether the face is known or unknown → Display "match found" or "match is not found" → End

CHAPTER-II

BIO-METRIC AUTHENTICATION

Biometrics is an automated method of identifying a person or verifying the identity of a person based on a physiological or behavioral characteristic. Examples of physiological characteristics include hand or finger images and facial characteristics. Biometric authentication compares a registered or enrolled biometric sample (biometric template or identifier) with a newly captured biometric sample (for example, an image captured during login). During enrollment, as shown in the figure below, a sample of the biometric trait is captured, processed by a computer, and stored for later comparison.

Biometric recognition can be used in identification mode, where the biometric system identifies a person from the entire enrolled population by searching a database for a match based solely on the biometric. Identification is sometimes called "one-to-many" matching. A system can also be used in verification mode, where the biometric system authenticates a person's claimed identity against his or her previously enrolled pattern. This is also called "one-to-one" matching. In most computer or network access environments, verification mode would be used.

Figure: Sample process

Types of Bio-Metrics:

2.1 FACE RECOGNITION:

The identification of a person by their facial image can be done in a number of different ways, such as capturing an image of the face in the visible spectrum using an inexpensive camera or using the infrared patterns of facial heat emission. Facial recognition in visible light typically models key features from the central portion of a facial image. Using a wide assortment of cameras, visible-light systems extract features from the captured image(s) that do not change over time, while avoiding superficial features such as facial expressions or hair. Several approaches to modeling facial images in the visible spectrum are Principal Component Analysis, Local Feature Analysis, neural networks, elastic graph theory, and multi-resolution analysis.

Some of the challenges of facial recognition in the visual spectrum include reducing the impact of variable lighting and detecting a mask or photograph. Some facial recognition systems may require a stationary or posed user in order to capture the image, though many systems use a real time process to detect a person's head and locate the face automatically. Major benefits of facial recognition are that it is non-intrusive, hands-free, continuous and accepted by most users.

2.2 FINGERPRINTS:

Fingerprints are unique for each finger of a person, including identical twins. Fingerprint recognition devices for desktop and laptop access are among the most commercially available biometric technologies nowadays and are widely available from many different vendors at low cost. With these devices, users no longer need to type passwords; instead, only a finger scan (enrolled on the desktop or laptop beforehand) provides instant access. Fingerprint systems can also be used in identification mode. Several states check fingerprints of new applicants for social services benefits to ensure recipients do not fraudulently obtain benefits under fake names.

2.3 IRIS RECOGNITION:

This recognition method uses the iris of the eye, which is the colored area that surrounds the pupil. Iris patterns are thought to be unique. The iris patterns are obtained through a video-based image acquisition system. Iris scanning devices have been used in personal authentication applications for several years. Systems based on iris recognition have substantially decreased in price, and this trend is expected to continue. The technology works well in both verification and identification modes (in systems performing one-to-many searches in a database). Current systems can be used even in the presence of eyeglasses and contact lenses. The technology is not intrusive and does not require physical contact with a scanner. Iris recognition has been demonstrated to work with individuals from different ethnic groups and nationalities.

2.4 SIGNATURE VERIFICATION:

This technology uses the dynamic analysis of a signature to authenticate a person. The technology is based on measuring speed, pressure and angle used by the person when a signature is produced. One focus for this technology has been e-business applications and other applications where signature is an accepted method of personal authentication.

2.5 SPEAKER RECOGNITION:

Speaker recognition has a history dating back some four decades, when the outputs of several analog filters were averaged over time for matching. Speaker recognition uses the acoustic features of speech that have been found to differ between individuals. These acoustic patterns reflect both anatomy (e.g., size and shape of the throat and mouth) and learned behavioral patterns (e.g., voice pitch, speaking style). This incorporation of learned patterns into the voice templates (the latter called "voiceprints") has earned speaker recognition its classification as a "behavioral biometric." Speaker recognition systems employ three styles of spoken input: text-dependent, text-prompted and text-independent. Most speaker verification applications use text-dependent input, which involves selection and enrollment of one or more voice passwords. Text-prompted input is used whenever there is concern that the user is an imposter. The various technologies used to process and store voiceprints include hidden Markov models, pattern matching algorithms, neural networks, matrix representation and decision trees. Some systems also use "anti-speaker" techniques, such as cohort models and world models.

Ambient noise levels can impede the collection of both the initial and subsequent voice samples. Performance degradation can result from changes in behavioral attributes of the voice and from enrollment using one telephone and verification on another. Voice changes due to aging also need to be addressed by recognition systems. Many companies market speaker recognition engines, often as part of large voice processing, control and switching systems. Capture of the biometric is considered non-invasive. The technology needs little additional hardware: using existing microphones and voice-transmission technology, it allows recognition over long distances via ordinary telephones (wireline or wireless).

Out of these biometric approaches, this project concentrates on face recognition.

CHAPTER-III

FACE RECOGNITION USING PCA:

INTRODUCTION:

The face is our primary focus of attention, playing a major role in conveying identity and emotion. We can recognize thousands of faces throughout our lifetime and identify familiar faces at a glance after years of separation. This skill is quite robust, despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses or changes in hairstyle or facial hair.

Computational models of face recognition, in particular, are interesting because they can contribute not only to theoretical insights but also to practical applications. Computers that recognize faces could be applied to a wide variety of problems, including criminal identification, security systems, image and film processing, and human computer interaction. Unfortunately, developing a computational model of face recognition is quite difficult, because faces are complex, multidimensional, and meaningful visual stimuli.

Attention should therefore be focused on developing a sort of early, pre-attentive pattern recognition capability that does not depend on having three-dimensional information or detailed geometry. A computational model of face recognition should be developed that is fast, reasonably simple, and accurate.

Automatically learning and later recognizing new faces is practical within this framework. Recognition under widely varying conditions is achieved by training on a limited number of characteristic views. This approach has advantages over other face recognition schemes in its speed, simplicity and learning capacity.

Images of faces, represented as high-dimensional pixel arrays, often belong to a manifold of intrinsically low dimension. Face recognition, and computer vision research in general, has witnessed a growing interest in techniques that capitalize on this observation, and apply algebraic and statistical tools for extraction and analysis of the underlying manifold. Eigen face is a face recognition approach that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals.

II.2 FACE SPACE AND ITS DIMENSIONALITY:

Computer analysis of face images deals with a visual signal (light reflected off the surface of a face) that is registered by a digital sensor as an array of pixel values. The pixels may encode color or only intensity. After proper normalization and resizing to a fixed m-by-n size, the pixel array can be represented as a point (i.e. a vector) in an mn-dimensional image space, simply by writing its pixel values in a fixed (typically raster) order. A critical issue in the analysis of such multi-dimensional data is the dimensionality, which is the number of coordinates necessary to specify a data point.

II.3 IMAGE SPACE Vs FACE SPACE:

In order to specify an arbitrary image in the image space, one needs to specify every pixel value. Thus the "nominal" dimensionality of the space, dictated by the pixel representation, is mn - a very high number even for images of modest size. However, much of the surface of a face is smooth and has regular texture, so per-pixel sampling is in fact unnecessarily dense: the value of a pixel is typically highly correlated with the values of the surrounding pixels. Moreover, the appearance of faces is highly constrained; for example, any frontal view of a face is roughly symmetrical, has eyes on the sides, nose in the middle, etc. A vast proportion of the points in the image space do not represent physically possible faces. Thus, the natural constraints dictate that the face images will in fact be confined to a subspace, which is referred to as the face space.

II.4 PRINCIPAL MANIFOLD AND BASIS FUNCTIONS:

It is common to model the face space as a (possibly disconnected) principal manifold embedded in the high-dimensional image space. Its intrinsic dimensionality is determined by the number of degrees of freedom within the face space; the goal of subspace analysis is to determine this number and to extract the principal modes of the manifold. The principal modes are computed as functions of the pixel values and are referred to as basis functions of the principal manifold.

To make these concepts concrete, consider a straight line in R3, passing through the origin and parallel to the vector a = [a1; a2; a3]^T. Any point on the line can be described by three coordinates; nevertheless, the subspace that consists of all points on the line has a single degree of freedom, with the principal mode corresponding to translation along the direction of a. Consequently, representing the points in this subspace requires a single basis function:

b(x1, x2, x3) = (a1 x1 + a2 x2 + a3 x3) / ||a||,

the projection of a point onto the direction of a. The analogy here is between the line and the face space, and between R3 and the image space.

II.5 PRINCIPAL COMPONENT ANALYSIS:

Principal Component Analysis (PCA) is a dimensionality reduction technique based on extracting the desired number of principal components of multi-dimensional data. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) needed to describe the data economically. This is the case when there is a strong correlation between the observed variables. The first principal component is the linear combination of the original dimensions that has the maximum variance; the n-th principal component is the linear combination with the highest variance subject to being orthogonal to the first n-1 principal components.

An important and largely unsolved problem in dimensionality reduction is the choice of k, the intrinsic dimensionality of the principal manifold. No analytical derivation of this number for a complex natural visual signal is available to date. To simplify this problem, it is common to assume that in the noisy embedding of the signal of interest (in our case, a point sampled from the face space) in a high-dimensional space, the signal-to-noise ratio is high. Statistically, that means that the variance of the data along the principal modes of the manifold is high compared to the variance within the complementary space.

This assumption relates to the eigen spectrum - the set of the eigenvalues of the data covariance matrix. The i-th eigenvalue is equal to the variance along the i-th principal component; thus, a reasonable algorithm for detecting k is to search for the location along the decreasing eigen spectrum where the value of λ_i drops significantly. Since the basis vectors constructed by PCA have the same dimension as the input face images, they were named "eigenfaces".
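As an illustration of this heuristic, the following minimal MATLAB sketch picks k so that the leading eigenvalues retain a fixed fraction of the total variance; the toy spectrum and the 95% cut-off are assumptions made for the example, not values taken from this project.

% Toy eigen spectrum; in practice lambda would hold the eigenvalues of the
% data covariance matrix, sorted in descending order.
lambda = [5.1 2.3 0.9 0.2 0.05 0.01];
retained = cumsum(lambda) / sum(lambda);     % cumulative fraction of variance
k = find(retained >= 0.95, 1, 'first');      % smallest k retaining 95% of the variance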

Viewed in information-theoretic terms, PCA provides a way of coding and decoding face images that may give insight into their information content, emphasizing the significant local and global "features". Such features may or may not be directly related to facial features such as eyes, nose, lips, and hair.

In the language of information theory, we want to extract the relevant information in a face image, encode it as efficiently as possible, and compare one face encoding with a database of models encoded similarly. A simple approach to extracting the information contained in an image of a face is to somehow capture the variation in a collection of images, independent of any judgment of features, and use this information to encode and compare individual face images.

These eigenvectors can be thought of as a set of features that together characterize the variation between face images. Each image location contributes more or less of each eigenvector, so that we can display the eigenvector as a sort of ghostly face which we call an Eigen face. Each individual face can be represented exactly in terms of a linear combination of the Eigen faces. Each face can also be approximated using only the "best" Eigen faces-those that have the largest Eigen values and which therefore account for the most variance within the set of face images. The best M Eigen faces span an M-Dimensional subspace- "face space" - of all possible images.

This approach of face recognition involves the following operations:

Initialization operations:

1. Acquire an initial set of face images (the training set).

2. Calculate the eigenfaces from the training set, keeping only the M images that correspond to the highest eigenvalues. These M images define the face space. As new faces are experienced, the eigenfaces can be updated or recalculated.

3. Calculate the corresponding distribution in M-dimensional weight space for each known individual, by projecting his or her face images onto the "face space".

Recognize new face images:

1. Calculate a set of weights based on the input image and the M Eigen faces by projecting the input image onto each of the eigenfaces.

2. Determine if the image is a face by checking to see if the image is sufficiently close to "face space".

3. If it is a face, classify the weight pattern as either a known person or as unknown.

4. (Optional) Update the Eigen faces and/or weight patterns.

5. (Optional) If the same unknown face is seen several times, calculate its characteristic weight pattern and incorporate into the known faces.

II.7 CLASSIFYING A NEW FACE IMAGE:

A new face image () is transformed into its Eigen face components (projected into "face space") by a simple operation,

For k = 1,…..,M'. This describes a set of point-by-point image multiplications and summations, operations performed at approximately frame rate on current image and its processing hardware. The weights form a vector ï- = [1,2…..'] that describes the contribution of each Eigen face in representing the input face image, treating the Eigen face as a basis set for face images. The vector may be used in a standard pattern recognition algorithm to find which of a number of predefined classes, if any best describes the face. The simplest method for determining of an input face image is to find the face class k that minimizes the Euclidian distance

Where ï-k is a vector describing the kth face class. The face classes ï-I are calculated by averaging the results of the Eigen face representation over a small number of face images of each individual. A face is classified as belonging to class k when minimum k is below some chosen threshold k. Otherwise the face is classified as "unknown" and optionally creates a new face class. Because creating the vector of weights is equivalent to projecting the original face image onto the low dimensional face space, many images will project onto a given pattern vector. The distance between the image and the face space is simply the squared distance between the mean adjusted input image = - and f = M'i = 1 iui , , its projection onto face space:

More on the Face Space:

To conclude this section, here is a brief discussion of the face space.

Fig: Face Space

Consider a simplified representation of the face space as shown in the figure above. The images of faces, and in particular the faces in the training set, should lie near the face space, which in general describes images that are face-like. The projection distance e_r should be under a threshold, as already seen. The images of known individuals fall near some face class in the face space.

There are four possible combinations of where an input image can lie:

1. Near a face class and near the face space: This is the case when the probe is nothing but the facial image of a known individual (known = image of this person is already in the database).

2. Near face space but away from face class: This is the case when the probe image is of a person (i.e. a facial image), but does not belong to anybody in the database i.e. away from any face class.

3. Distant from face space and near a face class: This happens when the probe image is not of a face, yet it still resembles a particular face class stored in the database.

4. Distant from both the face space and face class: When the probe is not a face image i.e. is away from the face space and is nothing like any face class stored.

Out of the four, type 3 is responsible for most false positives. To avoid them, face detection is recommended to be a part of such a system.

II.6 CALCULATING EIGEN FACES:

Images of faces, being similar in overall configuration, will not be randomly distributed in this huge image space and thus can be described by a relatively low-dimensional subspace. The main idea of principal component analysis is to find the vectors that best account for the distribution of face images within the entire image space. These vectors define the subspace of face images, which we call "face space". Each vector is of length N², describes an N-by-N image, and is a linear combination of the original face images; because these vectors are face-like in appearance, we refer to them as "eigenfaces".

1. There are M images in the training set.

2. There are K most significant Eigen faces using which we can satisfactorily approximate a face. Needless to say K < M.

3. All images are N x N matrices, which can be represented as N²x1 dimensional vectors. The same logic applies to images whose length and breadth are not equal. To take an example: an image of size 112 x 112 can be represented as a vector of dimension 12544, or simply as a point in a 12544-dimensional space.
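As a small illustration, the MATLAB lines below flatten one image into such a column vector; the file name is only a placeholder for a training image of the kind used in this project.

% Flatten an N x N grayscale image into an N^2 x 1 column vector.
I = imread('A1.pgm');                 % example file name; any training image would do
Gamma = double(reshape(I, [], 1));    % 12544 x 1 vector for a 112 x 112 image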

Algorithm for Finding Eigen faces:

1. Obtain the training images I1, I2, ..., IM; it is very important that the images are centered and of the same size.

Fig: Sample training images.

2. Represent each image Ii as a vector \Gamma_i as discussed above.

Fig: An N x N image written as an N² x 1 column vector.

3. Find the average face vector \Psi:

\Psi = \displaystyle\frac{1}{M}\sum_{i=1}^M\Gamma_i

4. Subtract the mean face from each face vector \Gamma_i to get a set of vectors \Phi_i. The purpose of subtracting the mean image from each image vector is to be left with only the distinguishing features from each face and "removing" in a way information that is common.

\Phi_i = \Gamma_i - \Psi

5. Find the Covariance matrix C:

C = AA^T, where A = [\Phi_1, \Phi_2 \ldots \Phi_M]

Note that the matrix A has simply been made by putting one mean-subtracted image vector in each column.

Also note that C is an N² x N² matrix and A is an N² x M matrix.

6. We now need to calculate the eigenvectors u_i of C. However, note that C is an N² x N² matrix, so it would return N² eigenvectors, each of dimension N². For an image this number is huge, and the required computations would easily make a system run out of memory. How do we get around this problem?

7. Instead of the matrix AA^T, consider the matrix A^TA. Since A is an N² x M matrix, A^TA is an M x M matrix. If we find the eigenvectors of this matrix, it returns M eigenvectors, each of dimension M x 1; let's call these eigenvectors v_i.

Now, from the properties of matrices, it follows that u_i = Av_i. We have found v_i earlier, so using v_i we can calculate the M largest eigenvectors of AA^T. Remember that M << N², as M is simply the number of training images.

8. Find the best M eigenvectors of C = AA^T by using the relation discussed above, that is, u_i = Av_i. Also normalise the results so that ||u_i|| = 1.

9. Select the best K eigenvectors; the selection of these eigenvectors is done heuristically (a sketch of the whole procedure follows).
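The following is a minimal MATLAB sketch of steps 1-9 above, written under the assumption that the vectorized training images already form the columns of a matrix Gamma; the random stand-in data, the matrix names and the choice K = 20 are illustrative only.

% Stand-in for M = 50 vectorized 112x112 training images (one per column).
Gamma = rand(112*112, 50);
M = size(Gamma, 2);
Psi = mean(Gamma, 2);                                % step 3: average face
A = Gamma - repmat(Psi, 1, M);                       % step 4: mean-subtracted images Phi_i
L = A' * A;                                          % step 7: M x M matrix A'A
[V, D] = eig(L);                                     % eigenvectors v_i of A'A
[~, order] = sort(diag(D), 'descend');               % largest eigenvalues first
V = V(:, order);
U = A * V;                                           % step 8: u_i = A*v_i (the eigenfaces)
U = U ./ repmat(sqrt(sum(U.^2, 1)), size(U, 1), 1);  % normalise so that ||u_i|| = 1
K = 20;                                              % step 9: heuristic choice, K < M
U = U(:, 1:K);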

Finding Weights:

The eigenvectors u_i found at the end of the previous section, when converted back to matrices by reversing step 2, have a face-like appearance. Since they are eigenvectors and have a face-like appearance, they are called eigenfaces. Sometimes they are also called Ghost Images because of their weird appearance.

Now each face in the training set (minus the mean), \Phi_i can be represented as a linear combination of these Eigenvectors ui:

\Phi_i = \sum_{j=1}^{K} w_j u_j, where the u_j are eigenfaces.

These weights can be calculated as :

w_j = u_j^T\Phi_i

Each normalized training image is represented in this basis as a vector.

\Omega_i = [w_1, w_2, \ldots, w_K]^T

where i = 1,2… M. This means we have to calculate such a vector corresponding to every image in the training set and store them as templates.
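Under the assumptions of the earlier sketch (eigenfaces U and mean-subtracted images A), these templates can be computed for all training images at once; the variable names follow that sketch rather than the project's own code.

Omega = U' * A;    % K x M matrix; column i is the weight (template) vector of training image i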

To calculate Energy Level:

function value = energyLevel(aMatrix)

% Obtain the Matrix elements... r - rows, c - columns.

[r, c] = size(aMatrix);

%Obtain energyLevel...

value = sum(sum(abs(aMatrix)))/(r*c);

3.7 Recognition Task:

Now suppose we have found the eigenfaces for the training images, selected a set of the most relevant eigenfaces, calculated the associated weights, and stored the weight vectors corresponding to each training image.

If an unknown probe face \Gamma is to be recognized then:

1. We normalize the incoming probe \Gamma as \Phi = \Gamma - \Psi .

2. We then project this normalized probe onto the Eigen space (the collection of Eigenvectors/faces) and find out the weights.

w_i = u_i^T\Phi

3. The normalized probe can then simply be represented as:

\Omega = [w_1, w_2, \ldots, w_K]^T

After the feature vector (weight vector) for the probe has been found out, we simply need to classify it. For the classification task we could simply use some distance measures or use some classifier like Support Vector Machines. In case we use distance measures, classification is done as:

Find e_r = min\begin{Vmatrix}\Omega - \Omega_i\end{Vmatrix}. This means we take the weight vector of the probe we have just found and compute its distance to the weight vectors associated with each of the training images.

And if e_r < \Theta, where \Theta is a threshold chosen heuristically, then we can say that the probe image is recognized as the image with which it gives the lowest score.

If however e_r > \Theta, then the probe does not belong to the database. How the threshold should be chosen is discussed below.
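A minimal MATLAB sketch of this recognition rule is given below, continuing the variable names of the earlier sketches (U, Psi, Omega); the probe data and the threshold value are stand-ins chosen only for illustration.

Gamma_probe = rand(112*112, 1);                  % stand-in for a vectorized probe image
Phi = Gamma_probe - Psi;                         % normalise the probe
w = U' * Phi;                                    % project onto the eigenfaces
dists = sqrt(sum((Omega - repmat(w, 1, size(Omega, 2))).^2, 1));
[e_r, best] = min(dists);                        % nearest training image
Theta = 100;                                     % illustrative threshold, chosen heuristically
if e_r < Theta
    fprintf('Recognised as training image %d\n', best);
else
    fprintf('Probe does not belong to the database\n');
end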

For distance measures, the most commonly used metric is the Euclidean distance; another is the Mahalanobis distance, which generally gives superior performance. Let's take a brief digression and look at these two simple distance measures and then return to the task of choosing a threshold.

Distance Measures:

Euclidean Distance: The Euclidean Distance is probably the most widely used distance metric. It is a special case of a general class of norms and is given as:

\begin{Vmatrix}x-y\end{Vmatrix}_e = \sqrt{\sum_i\begin{vmatrix}x_i-y_i\end{vmatrix}^2}

To calculate Euclidean Distance:

function value = euclideanDistance(X, Y)

[r, c] = size(X); % The length of the vector...

e = [];

% Euclidean Distance = sqrt [ (x1-y1)^2 + (x2-y2)^2 + (x3-y3)^2 ...]

for i = 1:c

e(i) = (X(i)-Y(i))^2;

end

Euclid = sqrt(sum(e));

% Return the Euclidean distance...

value = Euclid;

The Mahalanobis Distance: The Mahalanobis Distance is a better distance measure when it comes to pattern recognition problems. It takes into account the covariance between the variables and hence removes the problems related to scale and correlation that are inherent with the Euclidean Distance. It is given as:

d(x,y) =\sqrt{ (x-y)^TC^{-1}(x-y)}

where C is the covariance matrix of the variables involved.
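The following minimal MATLAB sketch computes this distance between two weight vectors; the random stand-in data and variable names are assumptions for the example only.

X = rand(50, 20);                       % stand-in: 50 training weight vectors of length 20 (one per row)
x = rand(20, 1); y = rand(20, 1);       % two weight vectors to compare
C = cov(X);                             % covariance matrix of the variables (20 x 20)
d = sqrt((x - y)' * (C \ (x - y)));     % Mahalanobis distance between x and y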

Deciding on the Threshold: Why is the threshold important?

Consider for simplicity we have ONLY 5 images in the training set. And a probe that is not in the training set comes up for the recognition task. The score for each of the 5 images will be found out with the incoming probe. And even if an image of the probe is not in the database, it will still say the probe is recognized as the training image with which its score is the lowest. Clearly this is an anomaly that we need to look at. It is for this purpose that we decide the threshold. The threshold is decided heuristically.

3.8 Face recognition by minimum distance classifier:

The minimum distance classifier (MDC) is used in various areas of pattern recognition because it is simple and fast compared with other, more complicated classifiers. It classifies an unknown pattern into the category to which the nearest prototype belongs. Although the classification accuracy of the MDC is often lower than that of more complicated classifiers (e.g., quadratic discriminant functions or neural networks), the MDC has been used in various areas of pattern recognition because it is intuitively understandable, simple and fast. Let x be an unknown pattern to be classified and z_i (i = 1, ..., n) be a prototype for category ω_i. Here x and z_i are m-dimensional vectors in the feature space, n is the number of categories, and m is the number of dimensions of the feature space.

The MDC is defined as:

x ∈ ω_i if dist(x, z_i) = min_j {dist(x, z_j)}

where dist(⋅) is the Euclidean distance function

dist(x, z_i) = √( Σ_{k=1}^{m} (x_k − z_{ik})² )

After mapping the training images from image space to face space and calculating the mean of each individual class, we now perform face recognition based on the minimum distance classifier concept, relative to the mean of each class. This concept avoids the problem of calculating a correct threshold value for recognition purposes. First, we take an image Γ_test as the test image and obtain the normalized image φ_test by subtracting the mean from it. The following steps outline our approach.

1) Projection of the normalized test image onto face space by:

Ω_test = u^T φ_test

where u = [u_1, u_2, ..., u_E] and Ω_test is the face-space representation of the test image.

2) Calculation of the mean of each class, including the test image, using

μ_k = (1/(P + 1)) ( Σ_{i=1}^{P} Ω_ki + Ω_test )

where k = 1 to C, μ_k = mean of each class including the test image, Ω_ki = i-th face-space image of the k-th class, C = number of classes (each person belongs to a separate class), and P = number of images per class.

3) Find the differences between the previous mean (without the test image) and the new mean (with the test image) for each class using

D_k = ξ_k − μ_k, k = 1 to C

where D_k = difference between the old and new mean values of the k-th class.

Figure : Eigen faces of all the training images

4) Compute the minimum value among all the mean differences computed in step 3:

D_min = Min(D_k), k = 1 to C

where D_min is the minimum value among all the D_k, and the minimum operation is denoted by Min.

5) If D_min tends to zero, then the test image is a known image (φ_test ∈ I); hence display the class which has the minimum mean difference. Otherwise, it is an unknown image.
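A minimal MATLAB sketch of steps 1-5 is given below; the array layout, the stand-in data and the small tolerance standing for "tends to zero" are assumptions made for the example and are not taken from the project's own code.

C = 5; P = 10; K = 20;                        % stand-in sizes: classes, images per class, eigenfaces
Omega = rand(C, P, K);                        % stand-in face-space images, Omega(k,i,:) = Omega_ki
Omega_test = rand(1, K);                      % stand-in face-space test image
tol = 1e-3;                                   % heuristic threshold for "tends to zero"
for k = 1:C
    classW = squeeze(Omega(k, :, :));         % P x K face-space images of class k
    oldMean = mean(classW, 1);                % previous mean, without the test image (xi_k)
    newMean = (sum(classW, 1) + Omega_test) / (P + 1);   % mu_k, including the test image
    D(k) = norm(oldMean - newMean);           % D_k
end
[Dmin, class] = min(D);
if Dmin < tol
    fprintf('Known face, class %d\n', class);
else
    fprintf('Unknown face\n');
end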

http://onionesquereality.wordpress.com/2009/02/11/face-recognition-using-eigenfaces-and-distance-classifiers-a-tutorial/

IV. DIGITAL IMAGE PROCESSING

4.1 Images and Digital Images

• A digital image differs from a photograph in that all of its values are discrete.

• A digital image holds only integer values.

• A digital image can be considered as a large array of discrete dots, each of which has a brightness associated with it. These dots are called picture elements, or more simply pixels.

• The pixels surrounding a given pixel constitute its neighborhood. A neighborhood can be characterized by its shape in the same way as a matrix: we can speak of a 3x3 neighborhood, or of a 5x7 neighborhood.

Each pixel has a color. The color is a 32-bit integer. The first eight bits determine the redness of the pixel, the next eight bits the greenness, the next eight bits the blueness, and the remaining eight bits the transparency of the pixel.

Types of Digital Images:

• Binary: Each pixel is just black or white. Since there are only two possible values for each pixel (0,1), we only need one bit per pixel.

• Grayscale: Each pixel is a shade of gray, normally from 0 (black) to 255 (white). This range means that each pixel can be represented by eight bits, or exactly one byte. Other grey scale ranges are used, but generally they are a power of 2.

• True Color, or RGB: Each pixel has a particular color, described by the amount of red, green and blue in it. If each of these components has a range 0-255, this gives 256³ (about 16.7 million) different possible colors. Such an image is a "stack" of three matrices, representing the red, green and blue values for each pixel. This means that for every pixel there correspond 3 values.
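As a small illustration, the MATLAB lines below split a true-color image into its three channel matrices; the file name is only a placeholder.

rgb = imread('photo.jpg');       % example file name for a true-color image
R = rgb(:, :, 1);                % red plane
G = rgb(:, :, 2);                % green plane
B = rgb(:, :, 3);                % blue plane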

4.2 IMAGE FILE SIZE:

Image file size is expressed as a number of bytes that increases with the number of pixels composing an image and with the color depth of the pixels. The greater the number of rows and columns, the greater the image resolution, and the larger the file. Also, each pixel of an image increases in size when its color depth increases: an 8-bit pixel (1 byte) stores 256 colors, while a 24-bit pixel (3 bytes) stores 16 million colors, the latter known as true color.

High resolution cameras produce large image files, ranging from hundreds of kilobytes to megabytes, depending on the camera's resolution and the image-storage format. High resolution digital cameras record 12 megapixel (1 MP = 1,000,000 pixels) images, or more, in true color. For example, consider an image recorded by a 12 MP camera: since each pixel uses 3 bytes to record true color, the uncompressed image would occupy 36,000,000 bytes of memory, a great amount of digital storage for one image, given that cameras must record and store many images to be practical. Faced with large file sizes, both within the camera and on a storage disc, image file formats were developed to store such large images.

4.3 IMAGE FILE FORMATS:

Image file formats are standardized means of organizing and storing images. This section is about digital image formats used to store photographic and other images. Image files are composed of either pixel or vector (geometric) data that are rasterized to pixels when displayed (with few exceptions, such as on a vector graphic display). Including proprietary types, there are hundreds of image file types. The PNG, JPEG, and GIF formats are the ones most often used to display images on the Internet.

Fig: Raster versus vector image formats.

The format used in this project is PGM (Portable Gray Map, a graphics file format); MATLAB supports the PGM format directly.
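For instance, a PGM training image can be loaded with imread as in the sketch below; the file name is only a placeholder.

I = imread('A1.pgm');     % example file name; returns the grayscale intensity matrix
disp(size(I));            % image dimensions
disp(class(I));           % uint8 for an 8-bit PGM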

4.4 IMAGE PROCESSING:

Digital image processing, the manipulation of images by computer, is a recent development in terms of man's ancient fascination with visual stimuli. In its short history, it has been applied to practically every type of image with varying degrees of success. The inherent subjective appeal of pictorial displays attracts perhaps a disproportionate amount of attention from scientists and also from the layman. Digital image processing, like other glamour fields, suffers from myths, misconceptions, misunderstandings and misinformation. It is a vast umbrella under which fall diverse aspects of optics, electronics, mathematics, photography, graphics and computer technology. It is a truly multidisciplinary endeavor, plagued with imprecise jargon.

Several factors combine to indicate a lively future for digital image processing. A major factor is the declining cost of computer equipment. Several new technological trends promise to further promote digital image processing. These include parallel processing made practical by low-cost microprocessors, the use of charge-coupled devices (CCDs) for digitizing, storage during processing and display, and large, low-cost image storage arrays.

4.5 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING:

Fig: Fundamental steps in digital image processing.

4.5.1 Image Acquisition:

Image acquisition is the stage that acquires a digital image. It requires an image sensor and the capability to digitize the signal produced by the sensor. The sensor could be a monochrome or color TV camera that produces an entire image of the problem domain every 1/30 s. The image sensor could also be a line-scan camera that produces a single image line at a time; in this case, the object's motion past the line produces a two-dimensional image.

4.5.2 Image Enhancement:

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.

4.5.3 Image restoration:

Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.

Fig: Example of image restoration.

4.5.6 Compression:

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

4.5.8 Segmentation:

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.

4.5.9 Representation and description:

Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.

4.6 COMPONENTS OF AN IMAGE PROCESSING SYSTEM:

As recently as the mid-1980s, numerous models of image processing systems being sold throughout the world were rather substantial peripheral devices that attached to equally substantial host computers. Late in the 1980s and early in the 1990s, the market shifted to image processing hardware in the form of single boards designed to be compatible with industry standard buses and to fit into engineering workstation cabinets and personal computers.

Fig: Components of a general-purpose image processing system.

Although large-scale image processing systems are still being sold for massive imaging applications, such as processing of satellite images, the trend continues toward miniaturizing and blending general-purpose small computers with specialized image processing hardware. The figure above shows the basic components comprising a typical general-purpose system used for digital image processing. The function of each component is discussed in the following paragraphs, starting with image sensing.

IMPLEMENTATION OF FACE RECOGNITION IN MATLAB

The entire procedure of training and testing is sequential and can be broadly divided into the following three steps:

1. Database Preparation

2. Training

3. Testing

The steps are shown below.

Flowchart indicating the sequence of implementation:

Database Preparation → Training (scan the database) → Testing (input data → recognize the input data)

3.1 DATABASE PREPARATION:

The database was obtained with 10 photographs of each person, taken at different viewing angles and with different expressions. There are 5 persons in the database. The database is kept in the train folder, which contains a subfolder for each person holding all of his/her photographs. A database was also prepared for the testing phase by taking 4-5 photographs of the persons with different expressions and viewing angles, but in similar conditions (such as lighting, background, distance from camera, etc.), using a low resolution camera. These images were stored in the test folder.

3.2 TRAINING:

1. Select any one (.PGM) file from train database using open file dialog box.

2. By using that, read all the faces of each person in train folder.

3. Normalize all the faces.

4. Find significant Eigenvectors of Reduced Covariance Matrix.

5. Hence calculate the Eigenvectors of Covariance Matrix.

6. Calculate Recognizing Pattern Vectors for each image and average RPV for each person

7. For each person, calculate the maximum of the distances of all of his or her image RPVs from the average RPV of that person (see the sketch below).
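A minimal MATLAB sketch of steps 6-7 is given below; the matrix layout, variable names and stand-in data are assumptions made for the example, not the project's own code.

Omega = rand(20, 50);            % stand-in: K x M matrix of RPVs, one column per training image
P = 10;                          % images per person, stored in consecutive columns
numPersons = size(Omega, 2) / P;
for p = 1:numPersons
    cols = (p-1)*P + 1 : p*P;                              % columns belonging to person p
    avgRPV(:, p) = mean(Omega(:, cols), 2);                % step 6: average RPV of person p
    d = sqrt(sum((Omega(:, cols) - repmat(avgRPV(:, p), 1, P)).^2, 1));
    maxDist(p) = max(d);                                   % step 7: largest within-person distance
end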

Flowchart for training:

Start → Select file from the train database → Read and normalize the faces → Calculate the eigenvalues and eigenvectors of the RCM (reduced covariance matrix) → Calculate the eigenvalues and eigenvectors of the covariance matrix → Calculate the RPVs → Calculate the distances → End

Database code:

function data
% Append the file names of all training images (A1.pgm ... E10.pgm, ten per
% person) to database.txt.
fid = fopen('database.txt', 'a');
prefixes = 'ABCDE';                          % one letter per person
for p = 1:length(prefixes)
    for i = 1:10                             % ten images per person
        filename = strcat(prefixes(p), num2str(i), '.pgm');
        fprintf(fid, '%s\r', filename);
    end
end
fclose(fid);

3.3 TESTING:

Testing is carried out by following steps:

1. Select an image which is to be tested using open file dialog box.

2. The image is read and normalized.

3. Calculate the RPV (Recognizing Pattern Vector) of the image using the eigenvectors of the covariance matrix.

4. Find the distance of this input image RPV from average RPVs of all the persons.

5. Find the person from which the distance is minimum.

6. If this minimum distance is less than the maximum distance of that person calculated during training, then the image is identified as that person; otherwise no match is found.

Flowchart for testing:

Start → Select file from the test database → Read and normalize the image → Calculate the RPV of the image → Find distances from the average RPVs → Find the minimum distance and the corresponding person → If the minimum distance <= maximum distance of that person, the image is identified as that person; otherwise no match is found → Stop

Fig. Flowchart for testing

MATLAB SOURCE CODE:

The following is the code for face recognition:

function varargout = guidemo(varargin)

gui_Singleton = 1;

gui_State = struct('gui_Name', mfilename, ...

'gui_Singleton', gui_Singleton, ...

'gui_OpeningFcn', @guidemo_OpeningFcn, ...

'gui_OutputFcn', @guidemo_OutputFcn, ...

'gui_LayoutFcn', [] , ...

'gui_Callback', []);

if nargin && ischar(varargin{1})

gui_State.gui_Callback = str2func(varargin{1});

end

if nargout

[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});

else

gui_mainfcn(gui_State, varargin{:});

end

% End initialization code

% --- Executes just before guidemo is made visible.

function guidemo_OpeningFcn(hObject, eventdata, handles, varargin)

% This function has no output args, see OutputFcn.

% hObject handle to figure

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% varargin command line arguments to guidemo (see VARARGIN)

% Choose default command line output for guidemo

handles.output = hObject;

a=ones(256,256);

axes(handles.axes6);

imshow(a);

axes(handles.axes7);

imshow(a);

axes(handles.axes8);

imshow(a);

axes(handles.axes9);

imshow(a);

axes(handles.axes10);

imshow(a);

axes(handles.axes11);

imshow(a);

axes(handles.axes12);

imshow(a);

axes(handles.axes13);

imshow(a);

axes(handles.axes14);

imshow(a);

axes(handles.axes15);

imshow(a);

axes(handles.axes16);

imshow(a);

% Update handles structure

guidata(hObject, handles);

% --- Outputs from this function are returned to the command line.

function varargout = guidemo_OutputFcn(hObject, eventdata, handles)

% varargout cell array for returning output args (see VARARGOUT);

% hObject handle to figure

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure

varargout{1} = handles.output;

% --- Executes on button press in Browse.

function Browse_Callback(hObject, eventdata, handles)

% hObject handle to Browse (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

[filename, pathname] = uigetfile('*.pgm', 'Pick an image');

if isequal(filename,0) || isequal(pathname,0)

warndlg('Image is not selected');

else

a=imread(filename);

axes(handles.axes16);

imshow(a);

handles.inputimage = a;

% Update handles structure

guidata(hObject, handles);

end

% --- Executes on button press in Add_Database.

function Add_Database_Callback(hObject, eventdata, handles)

% hObject handle to Add_Database (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

data;

helpdlg('image successfully added to database');

% --- Executes on button press in Remove_Database.

function Remove_Database_Callback(hObject, eventdata, handles)

% hObject handle to Remove_Database (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

ButtonName = questdlg('Do you want to delete database?', ...

'Genie Question', ...

'Yes','No');

if strcmp(ButtonName, 'Yes')

delete database.txt;

delete recognition.txt;

helpdlg('database deleted succesfully');

end

% --- Executes on button press in Recognition.

function Recognition_Callback(hObject, eventdata, handles)

% hObject handle to Recognition (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

inputimage = handles.inputimage ;

r_image=inputimage;

queryEnergies_r = obtainEnergies(r_image,6); % Obtain top 6 energies of the input image.

fid = fopen('database.txt');

fresultValues = []; % Results matrix...

fresultNames = {};

i = 1; % Indices...

j = 1;

while 1

imagename = fgetl(fid);

if ~ischar(imagename), break, end % Meaning: End of File...

[X] = imread(imagename);

r1_image=X;

imageEnergies_r = obtainEnergies(r1_image,6);

E_r = euclideanDistance(queryEnergies_r, imageEnergies_r);

fresultValues_r(i) = E_r;

fresultNames_r(j) = {imagename};

i = i + 1;

j = j + 1;

end

fclose(fid);

[sortedValues_r, index_r] = sort(fresultValues_r); % Sorted results... the vector index

if sortedValues_r(1) == 0

fid = fopen('recognition.txt', 'w+'); % Create a file, over-write old ones.

for i = 1:10 % Store top 10 matches...

imagename = char(fresultNames_r(index_r(i)));

fprintf(fid, '%s\r', imagename);

disp(imagename);

disp(sortedValues_r(i));

disp(' ');

end

fclose(fid);

filename='recognition.txt';

fid = fopen(filename);

i = 1; % Subplot index on the figure...

while 1

imagename = fgetl(fid);

if ~ischar(imagename), break, end % Meaning: End of File...

[x, map] = imread(imagename);

% subplot(4,5,i);

if i==1;

axes(handles.axes11);

imshow(x);

end

if i==2

axes(handles.axes12);

imshow(x);

end

if i==3

axes(handles.axes13);

imshow(x);

end

if i==4

axes(handles.axes14);

imshow(x);

end

if i==5

axes(handles.axes15);

imshow(x);

end

if i==6

axes(handles.axes6);

imshow(x);

end

if i==7

axes(handles.axes7);

imshow(x);

end

if i==8

axes(handles.axes8);

imshow(x);

end

if i==9

axes(handles.axes9);

imshow(x);

end

if i==10

axes(handles.axes10);

imshow(x);

end

i = i + 1;

end

fclose(fid);

set(handles.text2,'String','Authorised Person');

else

set(handles.text2,'String','UnAuthorised Person');

% displayResults1('textureResults_b.txt', 'Texture Results_b...');

end

handles.sortedValues_r = sortedValues_r;

guidata(hObject,handles)

% --- Executes on button press in clear.

function clear_Callback(hObject, eventdata, handles)

% hObject handle to clear (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

a=ones(256,256);

axes(handles.axes6);

imshow(a);

axes(handles.axes7);

imshow(a);

axes(handles.axes8);

imshow(a);

axes(handles.axes9);

imshow(a);

axes(handles.axes10);

imshow(a);

axes(handles.axes11);

imshow(a);

axes(handles.axes12);

imshow(a);

axes(handles.axes13);

imshow(a);

axes(handles.axes14);

imshow(a);

axes(handles.axes15);

imshow(a);

axes(handles.axes16);

imshow(a);

set(handles.text2,'String',' ');

set(handles.c,'String','0');

delete 'database.txt';

delete 'recognition.txt';

% --- Executes on button press in exit.

function exit_Callback(hObject, eventdata, handles)

% hObject handle to exit (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

exit;

% --- Executes on button press in View_Database.

function View_Database_Callback(hObject, eventdata, handles)

% hObject handle to View_Database (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

open 'database.txt';

% --- Executes on button press in View_Recognition.

function View_Recognition_Callback(hObject, eventdata, handles)

% hObject handle to View_Recognition (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

open 'recognition.txt';

% --- Executes on button press in CLASSIFIER.

function CLASSIFIER_Callback(hObject, eventdata, handles)

% hObject handle to CLASSIFIER (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

filename='recognition.txt';

fid = fopen(filename);

i = 1; % Subplot index on the figure...

a1 = 'A';

a2 = 'B';

a3 = 'C';

a4 = 'D';

a5 = 'E';

count1 = 0;

count2 = 0;

count3 = 0;

count4 = 0;

count5 = 0;

while 1
imagename = fgetl(fid);
if ~ischar(imagename), break, end % Meaning: End of File...
b=imagename;
i = i + 1;
cluster1=strncmp(a1,b,1);
cluster2=strncmp(a2,b,1);
cluster3=strncmp(a3,b,1);
cluster4=strncmp(a4,b,1);
cluster5=strncmp(a5,b,1);
if cluster1 == 1
count1=count1+1;
elseif cluster2 == 1
count2=count2+1;
elseif cluster3 == 1
count3=count3+1;
elseif cluster4 == 1
count4=count4+1;
elseif cluster5 == 1
count5=count5+1;
end
end

fclose(fid);

Out = [count1 count2 count3 count4 count5];

[Res ind]=max(Out);

disp(Out);

if ind == 1

disp('class A');

A='A';

set(handles.c,'String',A);

end

if ind == 2

disp('class B');

A='B';

set(handles.c,'String',A);

end

if ind == 3

disp('class C');

A='C';

set(handles.c,'String',A);

end

if ind == 4

disp('class D');

A='D';

set(handles.c,'String',A);

end

if ind == 5

disp('class E');

A='E';

set(handles.c,'String',A);

end

% --- Executes during object creation, after setting all properties.

function C_CreateFcn(hObject, eventdata, handles)

% hObject handle to C (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns called

if ispc

set(hObject,'BackgroundColor','white');

else

set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));

end

function C_Callback(hObject, eventdata, handles)

% hObject handle to C (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of C as text

% str2double(get(hObject,'String')) returns contents of C as a double

% --- Executes during object creation, after setting all properties.

function c_CreateFcn(hObject, eventdata, handles)

% hObject handle to c (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.

% See ISPC and COMPUTER.

if ispc

set(hObject,'BackgroundColor','white');

else

set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));

end

function c_Callback(hObject, eventdata, handles)

% hObject handle to c (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of c as text

% str2double(get(hObject,'String')) returns contents of c as a double

GUI:

A graphical user interface (GUI) is a user interface built with graphical objects, such as buttons, text fields, sliders, and menus. In general, these objects already have meanings to most computer users. For example, when you move a slider, a value changes; when you press an OK button, your settings are applied and the dialog box is dismissed. Of course, to leverage this built-in familiarity, you must be consistent in how you use the various GUI-building components.

Applications that provide GUIs are generally easier to learn and use since the person using the application does not need to know what commands are available or how they work. The action that results from a particular user action can be made clear by the design of the interface.

