# History And Applications Of Matrices Engineering Essay


Matrices have many applications today and are very useful to us. Physics makes use of matrices in various domains, for example in geometrical optics and in matrix mechanics; the latter led to the detailed study of matrices with an infinite number of rows and columns. Graph theory uses matrices to keep track of distances between pairs of vertices in a graph. Computer graphics uses matrices to project 3-dimensional space onto a 2-dimensional screen.

## Example of application

A message is converted into numeric form according to some scheme. The easiest scheme is to let space=0, A=1, B=2, ..., Y=25, and Z=26. For example, the message "Red Rum" would become 18, 5, 4, 0, 18, 21, 13.

This data is then placed into matrix form. The size of the matrix depends on the size of the encryption key. Let's say that our encryption matrix (encoding matrix) is a 2×2 matrix. Since there are seven pieces of data, I place them into a 4×2 matrix and fill the last spot with a space (0) to make the matrix complete. Let's call the original, unencrypted data matrix A.

$$A = \begin{pmatrix} 18 & 5 \\ 4 & 0 \\ 18 & 21 \\ 13 & 0 \end{pmatrix}$$

There is an invertible matrix which is called the encryption matrix or the encoding matrix. We'll call it matrix B. Since this matrix needs to be invertible, it must be square.

This could really be any invertible matrix; it's up to the person encrypting the message. I'll use this matrix:

$$B = \begin{pmatrix} 4 & -2 \\ -1 & 3 \end{pmatrix}$$

The unencrypted data is then multiplied by our encoding matrix. The result of this multiplication is the matrix containing the encrypted data. We'll call it matrix X.

$$X = AB = \begin{pmatrix} 18 & 5 \\ 4 & 0 \\ 18 & 21 \\ 13 & 0 \end{pmatrix} \begin{pmatrix} 4 & -2 \\ -1 & 3 \end{pmatrix} = \begin{pmatrix} 67 & -21 \\ 16 & -8 \\ 51 & 27 \\ 52 & -26 \end{pmatrix}$$

The message that you would pass on to the other person is the stream of numbers 67, -21, 16, -8, 51, 27, 52, -26.
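Here is a short Python sketch of this encoding step. The function names and the use of NumPy are my own choices, not part of the original scheme:

```python
import numpy as np

def text_to_numbers(message):
    """Convert text to numbers using the scheme space=0, A=1, ..., Z=26."""
    return [0 if ch == " " else ord(ch.upper()) - ord("A") + 1 for ch in message]

def encode(message, B):
    """Encrypt: pad with spaces, reshape into rows, multiply by the encoding matrix."""
    nums = text_to_numbers(message)
    n = B.shape[0]
    nums += [0] * (-len(nums) % n)        # pad with spaces (0) to complete the matrix
    A = np.array(nums).reshape(-1, n)     # here: a 4x2 matrix
    return A @ B                          # the encrypted matrix X = AB

B = np.array([[4, -2], [-1, 3]])          # the encoding matrix from above
print(encode("Red Rum", B).flatten())     # [ 67 -21  16  -8  51  27  52 -26]
```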

## Decryption Process

1. Place the encrypted stream of numbers that represents the encrypted message into a matrix.
2. Multiply by the decoding matrix. The decoding matrix is the inverse of the encoding matrix.
3. Convert the matrix into a stream of numbers.
4. Convert the numbers into the text of the original message.
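A matching decryption sketch under the same assumptions (NumPy, illustrative function names). Note the rounding step, since `np.linalg.inv` works in floating point; the scheme is case-insensitive, so the recovered text is uppercase:

```python
import numpy as np

def decode(X, B):
    """Decrypt: multiply by the inverse of the encoding matrix, then map back to text."""
    A = X @ np.linalg.inv(B)                   # (A B) B^-1 = A
    nums = np.rint(A).astype(int).flatten()    # round off floating-point error
    return "".join(" " if k == 0 else chr(k + ord("A") - 1) for k in nums)

B = np.array([[4, -2], [-1, 3]])
X = np.array([[67, -21], [16, -8], [51, 27], [52, -26]])
print(decode(X, B))                            # "RED RUM " (with the padding space)
```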

## DETERMINANTS

The determinant of a matrix A is denoted det(A), or without parentheses: det A. An alternative notation, used for compactness, especially in the case where the matrix entries are written out in full, is to denote the determinant of a matrix by surrounding the matrix entries by vertical bars instead of the usual brackets or parentheses.

For a fixed nonnegative integer n, there is a unique determinant function for the n×n matrices over any commutative ring R. In particular, this unique function exists when R is the field of real or complex numbers.

For any square matrix of order 2, we have found a necessary and sufficient condition for invertibility. Indeed, consider the matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$

The matrix A is invertible if and only if $\det(A) = ad - bc \neq 0$.

Example. Evaluate the determinant of a 2×2 matrix, say

$$A = \begin{pmatrix} 2 & 4 \\ 1 & 3 \end{pmatrix}.$$

Let us transform this matrix into a triangular one through elementary row operations. We keep the first row and add to the second row the first row multiplied by $-\tfrac{1}{2}$. We get

$$\begin{pmatrix} 2 & 4 \\ 0 & 1 \end{pmatrix}.$$

Using Property 2 (this row operation does not change the determinant), we get

$$\det(A) = 2 \cdot 1 = 2,$$

the product of the diagonal entries of the triangular matrix. Therefore, we have $\det(A) = 2$, which one may check easily ($2 \cdot 3 - 4 \cdot 1 = 2$).
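The same triangularization idea can be written as a small program. A minimal sketch (it assumes the pivots stay nonzero, so it is not a general-purpose determinant routine):

```python
import numpy as np

def det_by_elimination(M):
    """Determinant via reduction to upper-triangular form."""
    U = np.array(M, dtype=float)
    n = len(U)
    for i in range(n):
        for j in range(i + 1, n):
            U[j] -= (U[j, i] / U[i, i]) * U[i]  # add a multiple of row i to row j
    return U.diagonal().prod()                   # product of the diagonal entries

M = [[2, 4], [1, 3]]
print(det_by_elimination(M))   # 2.0
print(np.linalg.det(M))        # 2.0 (up to floating-point error)
```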

## EIGENVALUES AND EIGENVECTORS

In mathematics, eigenvalue, eigenvector, and eigenspace are related concepts in the field of linear algebra. The prefix eigen- is adopted from the German word "eigen" for "innate", "idiosyncratic", "own". Linear algebra studies linear transformations, which are represented by matrices acting on vectors. Eigenvalues, eigenvectors and eigenspaces are properties of a matrix. They are computed by a method described below, give important information about the matrix, and can be used in matrix factorization. They have applications in areas of applied mathematics as diverse as economics and quantum mechanics.

In general, a matrix acts on a vector by changing both its magnitude and its direction. However, a matrix may act on certain vectors by changing only their magnitude and leaving their direction unchanged (or possibly reversing it). These vectors are the eigenvectors of the matrix. A matrix acts on an eigenvector by multiplying its magnitude by a factor, which is positive if its direction is unchanged and negative if its direction is reversed. This factor is the eigenvalue associated with that eigenvector. An eigenspace is the set of all eigenvectors that have the same eigenvalue, together with the zero vector.

These concepts are formally defined in the language of matrices and linear transformations. Formally, if A is a linear transformation, a non-null vector x is an eigenvector of A if there is a scalar λ such that

$$Ax = \lambda x.$$

The scalar λ is said to be an eigenvalue of A corresponding to the eigenvector x.

## Eigenvalues and Eigenvectors: An Introduction

The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, this problem is crucial in solving systems of differential equations, analyzing population growth models, and calculating powers of matrices (in order to define the exponential of a matrix). Other areas such as physics, sociology, biology, economics and statistics have focused considerable attention on "eigenvalues" and "eigenvectors", their applications and their computations. Before we give the formal definition, let us introduce these concepts through an example.

## Example

Consider the matrix

$$A = \begin{pmatrix} 1 & 2 & 1 \\ 6 & -1 & 0 \\ -1 & -2 & -1 \end{pmatrix}.$$

Consider the three column matrices

$$C_1 = \begin{pmatrix} 1 \\ 6 \\ -13 \end{pmatrix}, \qquad C_2 = \begin{pmatrix} -1 \\ 2 \\ 1 \end{pmatrix}, \qquad C_3 = \begin{pmatrix} 2 \\ 3 \\ -2 \end{pmatrix}.$$

We have

$$AC_1 = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \qquad AC_2 = \begin{pmatrix} 4 \\ -8 \\ -4 \end{pmatrix}, \qquad AC_3 = \begin{pmatrix} 6 \\ 9 \\ -6 \end{pmatrix}.$$

In other words, we have

$$AC_1 = 0\,C_1, \qquad AC_2 = -4\,C_2, \qquad AC_3 = 3\,C_3.$$

Next consider the matrix P for which the columns are C1, C2, and C3, i.e.,

$$P = \begin{pmatrix} 1 & -1 & 2 \\ 6 & 2 & 3 \\ -13 & 1 & -2 \end{pmatrix}.$$

We have det(P) = 84. So this matrix is invertible. Easy calculations give

$$P^{-1} = \frac{1}{84}\begin{pmatrix} -7 & 0 & -7 \\ -27 & 24 & 9 \\ 32 & 12 & 8 \end{pmatrix}.$$

Next we evaluate the matrix $P^{-1}AP$. We leave the details to the reader to check that we have

$$P^{-1}AP = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -4 & 0 \\ 0 & 0 & 3 \end{pmatrix} = D.$$

In other words, we have

$$A = PDP^{-1},$$

where D is the diagonal matrix above. Using the matrix multiplication, we obtain

$$A^n = PD^nP^{-1},$$

which implies that A is similar to a diagonal matrix. In particular, we have

$$A^n = P\begin{pmatrix} 0 & 0 & 0 \\ 0 & (-4)^n & 0 \\ 0 & 0 & 3^n \end{pmatrix}P^{-1}$$

for n = 1, 2, 3, …. Note that it is almost impossible to find $A^{75}$ directly from the original form of A.
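Assuming the matrices reconstructed above, this diagonalization is easy to verify numerically, and it shows how $A^{75}$ becomes computable (a NumPy sketch; floating point, so the checks are approximate):

```python
import numpy as np

A = np.array([[1., 2., 1.], [6., -1., 0.], [-1., -2., -1.]])
P = np.array([[1., -1., 2.], [6., 2., 3.], [-13., 1., -2.]])  # columns C1, C2, C3
d = np.array([0., -4., 3.])                                    # the eigenvalues

# Check the diagonalization: P^-1 A P should equal diag(0, -4, 3).
print(np.allclose(np.linalg.inv(P) @ A @ P, np.diag(d)))       # True

# A^75 via A^n = P D^n P^-1: D^n just raises each diagonal entry to the n-th power.
A75 = P @ np.diag(d ** 75) @ np.linalg.inv(P)
print(np.allclose(A75, np.linalg.matrix_power(A, 75)))         # True (approximately)
```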

This example is so rich in consequences that many questions impose themselves in a natural way. For example, given a square matrix A, how do we find column matrices which behave like the ones above? In other words, how do we find the column matrices which will help find the invertible matrix P such that $P^{-1}AP$ is a diagonal matrix?

From now on, we will call column matrices vectors. So the above column matrices C1, C2, and C3 are now vectors. We have the following definition.

Definition. Let A be a square matrix. A non-zero vector C is called an eigenvector of A if and only if there exists a number (real or complex) λ such that

$$AC = \lambda C.$$

If such a number λ exists, it is called an eigenvalue of A. The vector C is called an eigenvector associated to the eigenvalue λ.

Remark. The eigenvector C must be non-zero since we have

$$A\,\mathbf{0} = \mathbf{0} = \lambda\,\mathbf{0}$$

for any number λ.

Example. Consider the matrix

$$A = \begin{pmatrix} 1 & 2 & 1 \\ 6 & -1 & 0 \\ -1 & -2 & -1 \end{pmatrix}.$$

We have seen that

$$AC_1 = 0\,C_1, \qquad AC_2 = -4\,C_2, \qquad AC_3 = 3\,C_3,$$

where

$$C_1 = \begin{pmatrix} 1 \\ 6 \\ -13 \end{pmatrix}, \qquad C_2 = \begin{pmatrix} -1 \\ 2 \\ 1 \end{pmatrix}, \qquad C_3 = \begin{pmatrix} 2 \\ 3 \\ -2 \end{pmatrix}.$$

So C1 is an eigenvector of A associated to the eigenvalue 0, C2 is an eigenvector of A associated to the eigenvalue -4, while C3 is an eigenvector of A associated to the eigenvalue 3.

It may be interesting to know whether we have found all the eigenvalues of A in the above example. Below, we discuss this question as well as some properties of the eigenvalues of a square matrix.

## PROOFS OF PROPERTIES OF EIGENVALUES

## PROPERTY 1

The inverse of a matrix A exists if and only if zero is not an eigenvalue of A.

Suppose A is a square matrix. Then A is singular if and only if λ = 0 is an eigenvalue of A.

Proof. We have the following equivalences:

A is singular

⇔ there exists x ≠ 0 such that Ax = 0

⇔ there exists x ≠ 0 such that Ax = 0x

⇔ λ = 0 is an eigenvalue of A. □

Since a singular matrix has zero as an eigenvalue, and the inverse of a singular matrix does not exist, it follows that for a matrix to be invertible all of its eigenvalues must be non-zero.
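A quick numerical illustration of this property (NumPy, with an obviously singular matrix chosen for the purpose):

```python
import numpy as np

S = np.array([[1., 2.], [2., 4.]])   # singular: the second row is twice the first
print(np.linalg.det(S))              # 0.0
print(np.linalg.eigvals(S))          # eigenvalues 0 and 5: zero appears, as claimed
```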

## PROPERTY 2

The eigenvalues of a real matrix are real or occur in complex-conjugate pairs.

Suppose A is a square matrix with real entries and x is an eigenvector of A for the eigenvalue λ. Then the conjugate $\bar{x}$ is an eigenvector of A for the conjugate eigenvalue $\bar{\lambda}$. □

Proof.

$$A\bar{x} = \bar{A}\bar{x} = \overline{Ax} = \overline{\lambda x} = \bar{\lambda}\bar{x}.$$

The first equality holds because A has real entries, so $A = \bar{A}$; the second uses the lemma below on conjugates of matrix products; and the third holds because x is an eigenvector of A for the eigenvalue λ. Since x ≠ 0, also $\bar{x} \neq 0$, so $\bar{x}$ is an eigenvector of A for the eigenvalue $\bar{\lambda}$. □

Lemma. Suppose A is an m×n matrix and B is an n×p matrix. Then $\overline{AB} = \bar{A}\,\bar{B}$. □

Proof. To obtain this matrix equality, we work entry-by-entry. For 1 ≤ i ≤ m and 1 ≤ j ≤ p,

$$\left[\overline{AB}\right]_{ij} = \overline{[AB]_{ij}} = \overline{\sum_{k=1}^{n} A_{ik}B_{kj}} = \sum_{k=1}^{n} \overline{A_{ik}B_{kj}} = \sum_{k=1}^{n} \overline{A_{ik}}\,\overline{B_{kj}} = \left[\bar{A}\,\bar{B}\right]_{ij}.$$
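Property 2 is easy to observe numerically. A small check (the rotation matrix is my choice of example):

```python
import numpy as np

R = np.array([[0., -1.], [1., 0.]])  # a real matrix with no real eigenvalues
print(np.linalg.eigvals(R))          # [0.+1.j 0.-1.j]: a complex-conjugate pair
```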

## APPLICATION OF EIGENVALUES IN FACIAL RECOGNITION

## How does it work?

The task of facial recognition is discriminating input signals (image data) into several classes (persons). The input signals are highly noisy (e.g. the noise is caused by differing lighting conditions, pose, etc.), yet the input images are not completely random, and in spite of their differences there are patterns which occur in any input signal. Such patterns, which can be observed in all signals, could be, in the domain of facial recognition, the presence of some objects (eyes, nose, mouth) in any face, as well as the relative distances between these objects. These characteristic features are called eigenfaces in the facial recognition domain (or principal components generally). They can be extracted out of original image data by means of a mathematical tool called Principal Component Analysis (PCA).

By means of PCA one can transform each original image of the training set into a corresponding eigenface. An important feature of PCA is that one can reconstruct any original image from the training set by combining the eigenfaces. Remember that eigenfaces are nothing less than characteristic features of the faces. Therefore one could say that the original face image can be reconstructed from eigenfaces if one adds up all the eigenfaces (features) in the right proportion. Each eigenface represents only certain features of the face, which may or may not be present in the original image. If the feature is present in the original image to a higher degree, the share of the corresponding eigenface in the "sum" of the eigenfaces should be greater. If, on the contrary, the particular feature is not (or almost not) present in the original image, then the corresponding eigenface should contribute a smaller part (or none at all) to the sum of eigenfaces. So, in order to reconstruct the original image from the eigenfaces, one has to build a kind of weighted sum of all eigenfaces. That is, the reconstructed original image is equal to a sum of all eigenfaces, with each eigenface having a certain weight. This weight specifies to what degree the specific feature (eigenface) is present in the original image.

If one uses all the eigenfaces extracted from the original images, one can reconstruct the original images from the eigenfaces exactly. But one can also use only a part of the eigenfaces. Then the reconstructed image is an approximation of the original image. However, one can ensure that the loss due to omitting some of the eigenfaces is minimized, by choosing only the most important features (eigenfaces). Omission of eigenfaces is necessary due to scarcity of computational resources.
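The reconstruction described above can be sketched in a few lines of NumPy. Everything here is illustrative: random data stands in for real face images, and the eigenfaces are taken from an SVD of the mean-centered data (a standard way to perform PCA):

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))      # stand-in for flattened training images
mean = faces.mean(axis=0)
centered = faces - mean

# The rows of Vt are the principal components ("eigenfaces").
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

k = 5                                  # keep only the k most important eigenfaces
weights = centered @ Vt[:k].T          # each image's weights (projections)
approx = mean + weights @ Vt[:k]       # weighted sum of k eigenfaces
print(np.linalg.norm(faces - approx))  # reconstruction error; it shrinks as k grows
```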

How does this relate to facial recognition? The clue is that it is possible not only to extract a face from the eigenfaces given a set of weights, but also to go the opposite way: extract the weights from the eigenfaces and the face to be recognized. These weights tell nothing less than the amount by which the face in question differs from the "typical" faces represented by the eigenfaces. Therefore, using these weights one can determine two important things:

1. Whether the image in question is a face at all. If the weights of the image differ too much from the weights of face images (i.e. images of which we know for sure that they are faces), the image probably is not a face.

2. Similar faces (images) possess similar features (eigenfaces) to similar degrees (weights). If one extracts the weights from all the images available, the images can be grouped into clusters. That is, all images having similar weights are likely to be similar faces.
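Both points can be expressed as a nearest-neighbour comparison of weights. A self-contained sketch continuing the same illustrative setup (the threshold value is arbitrary, not a calibrated one):

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))            # flattened training images (synthetic)
mean = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
k = 5
weights = (faces - mean) @ Vt[:k].T          # weights of the known faces

def classify(image, threshold=50.0):
    w = (image - mean) @ Vt[:k].T            # weights of the new image
    dists = np.linalg.norm(weights - w, axis=1)
    i = int(np.argmin(dists))
    return None if dists[i] > threshold else i   # None: probably not a face

print(classify(faces[3]))                    # 3 -- matches itself, as expected
```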