Gaussian Elimination in Linear Algebra


In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. It is named after the German mathematician and scientist Carl Friedrich Gauss, although the method predates him, which makes it an example of Stigler's law.

Elementary row operations are used to reduce a matrix to row echelon form. Gauss-Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications, and is cheaper than the Gauss-Jordan version.

The method of Gaussian elimination appears in Chapter Eight, Rectangular Arrays, of the important Chinese mathematical text Jiuzhang suanshu or The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 CE, but parts of it were written as early as approximately 150 BCE.[1] It was commented on by Liu Hui in the 3rd century.

The method in Europe stems from the notes of Isaac Newton.[2] In 1670, he wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as Arithmetica Universalis in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century. In 1810, Carl Friedrich Gauss devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least-squares problems. The algorithm that is taught in high school was named for Gauss only in the 1950s as a result of confusion over the history of the subject.

The process of Gaussian elimination has two parts. The first part (forward elimination) reduces a given system to triangular or echelon form, or produces a degenerate equation (such as 0 = 1), indicating that the system has no solution. This is accomplished through the use of elementary row operations. The second part uses back substitution to find the solution of the resulting triangular system.

Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary row operations while the second reduces it to reduced row echelon form, or row canonical form.

Another point of view, which turns out to be very useful for analyzing the algorithm, is that Gaussian elimination computes a matrix decomposition. The three elementary row operations used in Gaussian elimination (multiplying a row by a nonzero scalar, switching two rows, and adding a multiple of one row to another) amount to multiplying the original matrix by invertible matrices from the left. The first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row-echelon matrix.
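
To make the "row operations are left-multiplications" point of view concrete, here is a small Python sketch (the matrix values are an arbitrary 3x3 example and the variable names are illustrative, not taken from the text). It builds the elementary matrix for "add a multiple of one row to another row" and checks that multiplying by it on the left performs exactly that row operation.

import numpy as np

A = np.array([[1., 1., 1.],
              [1., -2., 2.],
              [1., 2., -1.]])

# Elementary matrix for "add (-1) times row 0 to row 2":
# start from the identity and place the multiplier below the diagonal.
E = np.eye(3)
E[2, 0] = -1.0

# Applying the row operation directly ...
B = A.copy()
B[2] -= B[0]

# ... gives the same result as multiplying by E on the left.
print(np.allclose(E @ A, B))   # True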

It is quite hard to solve non-linear systems of equations, while linear systems are quite easy to study. There are numerical techniques which help to approximate nonlinear systems with linear ones in the hope that the solutions of the linear systems are close enough to the solutions of the nonlinear systems. We will not discuss this here. Instead, we will focus our attention on linear systems.

For the sake of simplicity, we will restrict ourselves to three, at most four, unknowns. The reader interested in the case of more unknowns may easily extend the following ideas.

Definition. The equation

a x + b y + c z + d w = h

where a, b, c, d, and h are known numbers, while x, y, z, and w are unknown numbers, is called a linear equation. If h =0, the linear equation is said to be homogeneous. A linear system is a set of linear equations and a homogeneous linear system is a set of homogeneous linear equations.

For example,



\begin{displaymath}\left\{ \begin{array}{lll} x+y-z &=& 1\\ x+3y + 3 z&=& -2\\ \end{array} \right.\end{displaymath}

is a linear system, while

\begin{displaymath}\left\{ \begin{array}{lll} 2x - 3y^2 &=& -1\\ x+y + z &=& 1\\ \end{array} \right.\end{displaymath}

is a nonlinear system (because of the y^2 term). The system

\begin{displaymath}\left\{ \begin{array}{lllllllll} 2x& - &3y& -&3 z &+ &w &=&0\\ x&+&3y&&&& &=& 0\\ x &-& y& +&&& w &=&0\\ \end{array} \right.\end{displaymath}

is a homogeneous linear system.

Matrix Representation of a Linear System

Matrices are helpful in rewriting a linear system in a very simple form. The algebraic properties of matrices may then be used to solve systems. First, consider the linear system

\begin{displaymath}\left\{ \begin{array}{lllllll} ax + by + cz + dw &=& e\\ fx + gy + hz + iw &=& j\\ kx + ly + mz + nw &=& p\\ qx + ry + sz + tw &=& u\\ \end{array} \right.\end{displaymath}

Set the matrices

\begin{displaymath}A = \left( \begin{array}{llll} a& b& c& d\\ f& g& h&i\\ k& l& m&n\\ q&r&s& t\\ \end{array} \right),\qquad X = \left( \begin{array}{l} x\\ y\\ z\\ w\\ \end{array} \right),\qquad C = \left( \begin{array}{l} e\\ j\\ p\\ u\\ \end{array} \right).\end{displaymath}

Using matrix multiplication, we can rewrite the linear system above as the matrix equation

\begin{displaymath}A\cdot X=C.\end{displaymath}

As you can see, this is far nicer than writing out the equations. But sometimes it is worthwhile to solve the system directly without going through the matrix form. The matrix A is called the coefficient matrix of the linear system. The matrix C is called the nonhomogeneous term. When C = 0, the linear system is homogeneous. The matrix X is the unknown matrix; its entries are the unknowns of the linear system. The augmented matrix associated with the system is the matrix [A|C], where

\begin{displaymath}[A\vert C]= \left( \begin{array}{llll\vert l} a& b& c& d&e\\ f& g& h&i&j\\ k& l& m&n&p\\ q&r&s& t&u\\ \end{array} \right).\end{displaymath}

In general, if a linear system has n equations and m unknowns, then the coefficient matrix is an n×m matrix and the augmented matrix is an n×(m+1) matrix. Now we turn our attention to the solutions of a system.
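
As a small illustrative sketch of this bookkeeping (using the 3-equation example solved later in this text; the variable names are just for illustration), the coefficient matrix, the nonhomogeneous term, and the augmented matrix can be assembled as follows, and their shapes match the n×m and n×(m+1) counts above.

import numpy as np

# Coefficient matrix A and nonhomogeneous term C for the example below:
# x + y + z = 0,  x - 2y + 2z = 4,  x + 2y - z = 2.
A = np.array([[1, 1, 1],
              [1, -2, 2],
              [1, 2, -1]])
C = np.array([[0], [4], [2]])

# The augmented matrix [A|C] simply glues C on as an extra column.
aug = np.hstack([A, C])

print(A.shape)    # (3, 3)  -- n x m
print(aug.shape)  # (3, 4)  -- n x (m + 1)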

Definition. Two linear systems with n unknowns are said to be equivalent if and only if they have the same set of solutions.

This definition is important since the idea behind solving a system is to find an equivalent system which is easy to solve. You may wonder how we come up with such a system. Easy: we do that through elementary operations. Indeed, it is clear that if we interchange two equations, the new system is still equivalent to the old one. If we multiply an equation by a nonzero number, we obtain a new system that is still equivalent to the old one. And finally, if we replace one equation by the sum of itself and a multiple of another equation, we again obtain an equivalent system. These operations are called elementary operations on systems. Let us see how this works in a particular case.

Example. Consider the linear system

\begin{displaymath}\left\{ \begin{array}{lll} x+y+z&=& 0\\ x-2y + 2z&=& 4\\ x+2y -z&=& 2\\ \end{array} \right.\end{displaymath}

The idea is to keep the first equation and work on the last two. In doing that, we will try to kill one of the unknowns and solve for the other two. For example, if we keep the first and second equation, and subtract the first one from the last one, we get the equivalent system

\begin{displaymath}\left\{ \begin{array}{ccccccc} x+&y+z&=& 0\\ x-&2y + 2z&=& 4\\ &y -2z&=& 2\\ \end{array} \right.\end{displaymath}


Next we keep the first and the last equation, and we subtract the first from the second. We get the equivalent system

\begin{displaymath}\left\{ \begin{array}{ccccccc} x+&y+z&=& 0\\ -&3y + z&=& 4\\ &y -2z&=& 2\\ \end{array} \right.\end{displaymath}

Now we focus on the second and the third equation. We repeat the same procedure: try to kill one of the two unknowns (y or z). Indeed, we keep the first and second equations, multiply the third equation by 3, and add the second equation to it. We get

\begin{displaymath}\left\{ \begin{array}{ccccccc} x+&y+&z&=& 0\\ -&3y + &z&=& 4\\ &-&5z&=& 10\\ \end{array} \right.\end{displaymath}

Going from the last equation to the first while solving for the unknowns is called backsolving. 
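
For the triangular system above, backsolving proceeds from the bottom up: the last equation gives z, substituting into the second gives y, and substituting both into the first gives x:

\begin{displaymath}-5z = 10 \;\Rightarrow\; z = -2, \qquad -3y + z = 4 \;\Rightarrow\; y = -2, \qquad x + y + z = 0 \;\Rightarrow\; x = 4.\end{displaymath}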

Keep in mind that linear systems for which the coefficient matrix is upper-triangular are easy to solve. This is particularly true if the matrix is in echelon form. So the trick is to perform elementary operations to transform the initial linear system into another one for which the coefficient matrix is in echelon form.

Using our knowledge about matrices, is there any way we can rewrite what we did above in matrix form, making our notation (or representation) easier? Indeed, there is: consider the augmented matrix

\begin{displaymath}\left( \begin{array}{rrr\vert r} 1&1&1& 0\\ 1&-2&2& 4\\ 1&2&-1& 2\\ \end{array} \right).\end{displaymath}

Let us perform some elementary row operations on this matrix. Indeed, if we keep the first and second row, and subtract the first one from the last one we get

\begin{displaymath}\left( \begin{array}{rrr\vert r} 1&1&1& 0\\ 1&-2&2& 4\\ 0&1&-2& 2\\ \end{array} \right).\end{displaymath}

Next we keep the first and the last rows, and we subtract the first from the second. We get

\begin{displaymath}\left( \begin{array}{rrr\vert r} 1&1&1& 0\\ 0&-3&1& 4\\ 0&1&-2& 2\\ \end{array} \right).\end{displaymath}

Then we keep the first and second rows, multiply the third row by 3, and add the second row to it, to get

\begin{displaymath}\left( \begin{array}{rrr\vert r} 1&1&1& 0\\ 0&-3&1& 4\\ 0&0&-5& 10\\ \end{array} \right).\end{displaymath}

This is a triangular matrix, although it is not in echelon form (its leading entries are not 1). The linear system for which this matrix is the augmented matrix is

\begin{displaymath}\left\{ \begin{array}{ccccccc} x+&y+&z&=& 0\\ -&3y + &z&=& 4\\ &-&5z&=& 10\\ \end{array} \right.\end{displaymath}

As you can see, we obtained the same system as before. In fact, we followed the same elementary operations performed above. At every step the new matrix was exactly the augmented matrix associated with the new system. This shows that, instead of writing the systems over and over again, we can simply perform elementary row operations on the augmented matrix and, once we obtain a triangular matrix, write down the associated linear system and solve it. This is known as Gaussian elimination. Let us summarize the procedure:

Gaussian Elimination. Consider a linear system.

1. Construct the augmented matrix for the system;

2. Use elementary row operations to transform the augmented matrix into a triangular one;

3. Write down the new linear system for which the triangular matrix is the associated augmented matrix;

4. Solve the new system. You may need to assign parametric values to some unknowns, and then apply the method of back substitution (a code sketch of the whole procedure follows this list).
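
As a rough illustration of these four steps, here is a minimal Python sketch. The function name is mine, and the partial pivoting (swapping in the largest available pivot so a zero pivot does not stop the elimination) is an extra safeguard not mentioned in the procedure above; it forward-eliminates a square system and then back-substitutes.

import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b: forward elimination to triangular form, then back substitution."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)

    # Forward elimination: use row k to produce zeros below the pivot in column k.
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining entry of column k to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if A[p, k] == 0:
            raise ValueError("matrix is singular")
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier for row i
            A[i, k:] -= m * A[k, k:]     # row i := row i - m * row k
            b[i] -= m * b[k]

    # Back substitution: solve the triangular system from the last equation up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The 3x3 example from the text: x + y + z = 0, x - 2y + 2z = 4, x + 2y - z = 2.
A = np.array([[1, 1, 1], [1, -2, 2], [1, 2, -1]])
b = np.array([0, 4, 2])
print(gaussian_elimination(A, b))   # -> [ 4. -2. -2.]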

Example. Solve the following system via Gaussian elimination

\begin{displaymath}\left\{ \begin{array}{rcl} 2x - 3y - z + 2w + 3v &=& 4\\ 4x - 4y - z + 4w + 11v &=& 4\\ 2x - 5y - 2z + 2w - v &=& 9\\ 2y + z + 4v &=& -5\\ \end{array} \right.\end{displaymath}

The augmented matrix is

\begin{displaymath}\left( \begin{array}{rrrrr\vert r} 2&-3&-1& 2&3&4\\ 4&-4&-1& 4&11&4\\ 2&-5&-2& 2&-1&9\\ 0&2&1& 0&4&-5\\ \end{array} \right).\end{displaymath}

We use elementary row operations to transform this matrix into a triangular one. We keep the first row and use it to produce all zeros elsewhere in the first column (we subtract twice the first row from the second, and subtract the first row from the third). We have

\begin{displaymath}\left( \begin{array}{rrrrr\vert r} 2&-3&-1& 2&3&4\\ 0&2&1& 0&5&-4\\ 0&-2&-1& 0&-4&5\\ 0&2&1& 0&4&-5\\ \end{array} \right).\end{displaymath}

Next we keep the first and second rows and produce zeros below the second column (we add the second row to the third, and subtract the second row from the fourth). We get

\begin{displaymath}\left( \begin{array}{rrrrr\vert r} 2&-3&-1& 2&3&4\\ 0&2&1& 0&5&-4\\ 0&0&0& 0&1&1\\ 0&0&0& 0&-1&-1\\ \end{array} \right).\end{displaymath}

Next we keep the first three rows and add the third row to the last one to get

\begin{displaymath}\left( \begin{array}{rrrrr\vert r} 2&-3&-1& 2&3&4\\ 0&2&1& 0&5&-4\\ 0&0&0& 0&1&1\\ 0&0&0& 0&0&0\\ \end{array} \right).\end{displaymath}

This is a triangular matrix. Its associated system is

\begin{displaymath}\left\{ \begin{array}{cccccccccc} 2x-&3 y&-&z &+ &2w& + &3 v&=&4\\ &2y&+&z &&&+&5 v&=&-4\\ &&&&&&&v&=&1 \end{array} \right.\end{displaymath}

Clearly we have v = 1. Set z=s and w=t, then we have

\begin{displaymath}y = -2 -\frac{1}{2} z - \frac{5}{2} v = -\frac{9}{2} - \frac{1}{2} s.\end{displaymath}

The first equation implies

\begin{displaymath}x = 2 + \frac{3}{2} y + \frac{1}{2} z - w - \frac{3}{2} v.\end{displaymath}

Using algebraic manipulations, we get

\begin{displaymath}x = -\frac{25}{4} - \frac{1}{4} s - t.\end{displaymath}

Putting all the stuff together, we have

\begin{displaymath}\left( \begin{array}{c} x\\ y\\ z\\ w\\ v\\ \end{array} \right) = \left( \begin{array}{c} -\frac{25}{4} - \frac{1}{4} s - t\\ -\frac{9}{2} - \frac{1}{2} s\\ s\\ t\\ 1\\ \end{array} \right).\end{displaymath}
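
As a quick, informal check of this parametric solution (the helper function below is purely for the check and is not part of the text), substituting it back into the original system satisfies all four equations for any values of the parameters s and t:

import numpy as np

def solution(s, t):
    """Parametric solution (x, y, z, w, v) found above."""
    x = -25/4 - s/4 - t
    y = -9/2 - s/2
    return np.array([x, y, s, t, 1.0])

# Coefficient matrix and right-hand side of the original system.
A = np.array([[2, -3, -1, 2, 3],
              [4, -4, -1, 4, 11],
              [2, -5, -2, 2, -1],
              [0, 2, 1, 0, 4]], dtype=float)
b = np.array([4, 4, 9, -5], dtype=float)

for s, t in [(0.0, 0.0), (1.0, -2.0), (3.5, 7.0)]:
    print(np.allclose(A @ solution(s, t), b))   # True for every choice of s and t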


Suppose the goal is to find and describe the solution(s), if any, of the following system of linear equations:

\begin{alignat}{7} 2x &&\; + \;&& y &&\; - \;&& z &&\; = \;&& 8 & \qquad (L_1) \\ -3x &&\; - \;&& y &&\; + \;&& 2z &&\; = \;&& -11 & \qquad (L_2) \\ -2x &&\; + \;&& y &&\; + \;&& 2z &&\; = \;&& -3 & \qquad (L_3) \end{alignat}

The algorithm is as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.

In the example, x is eliminated from L2 by adding \frac{3}{2}L_1 to L2. x is then eliminated from L3 by adding L1 to L3. Formally:

L_2 + \frac{3}{2}L_1 \rightarrow L_2

L_3 + L_1 \rightarrow L_3

The result is:

\begin{alignat}{7} 2x &&\; + \;&& y &&\; - \;&& z &&\; = \;&& 8 & \\ \frac{1}{2}y &&\; + \;&& \frac{1}{2}z &&\; = \;&& 1 & \\ 2y &&\; +\;&& z &&\; = \;&& 5 & \end{alignat}

Now y is eliminated from L3 by adding − 4L2 to L3:

L_3 + -4L_2 \rightarrow L_3

The result is:

\begin{alignat}{7} 2x &&\; + \;&& y &&\; - \;&& z &&\; = \;&& 8 & \\ \frac{1}{2}y &&\; + \;&& \frac{1}{2}z &&\; = \;&& 1 & \\ -z &&\; = \;&& 1 & \end{alignat}

This result is a system of linear equations in triangular form, and so the first part of the algorithm is complete.

The last part, back-substitution, consists of solving for the unknowns in reverse order. It can thus be seen that

z = -1 \quad (L_3)

Then, z can be substituted into L2, which can then be solved to obtain

y = 3 \quad (L_2)

Next, z and y can be substituted into L1, which can be solved to obtain

x = 2 \quad (L_1)

The system is solved.
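
As a brief cross-check (purely illustrative, not part of the worked example above), a standard linear-algebra routine gives the same solution for this system:

import numpy as np

# The system 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3.
A = np.array([[2, 1, -1],
              [-3, -1, 2],
              [-2, 1, 2]], dtype=float)
b = np.array([8, -11, -3], dtype=float)

print(np.linalg.solve(A, b))   # approximately [ 2.  3. -1.], i.e. x = 2, y = 3, z = -1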


The following steps reduce a matrix to reduced row-echelon form (the Gauss-Jordan method):

Step 1:

Locate the leftmost column not consisting completely of zeros.

Step 2:

Interchange the top row with another row, if necessary, to bring a non-zero entry to the top of this column.

Step 3:

If this non-zero entry is a, multiply the first row by 1/a to introduce a leading 1.

Step 4:

Add suitable multiples of the top row to all the other rows in order to have only zero entries in the column below this leading 1.

Step 5:

Now ignore the top row of the matrix and begin again with step 1 applied to the submatrix that remains. Continue until the whole matrix is in row-echelon form.

Step 6:

Then, beginning with the last non-zero row and working upwards, add suitable multiples of each row to the rows above in order to introduce zeros above the leading 1s.

These are the steps of the Gauss-Jordan method; a short code sketch follows.
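
Here is a compact Python sketch of these six steps. The function name rref is mine; choosing the largest available pivot is just one way to satisfy step 2, and the sketch folds step 6 into the main pass by clearing entries above each leading 1 immediately rather than in a separate upward sweep.

import numpy as np

def rref(M, tol=1e-12):
    """Reduce M to reduced row-echelon form, roughly following the six steps above."""
    M = M.astype(float)
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):                      # step 1: scan columns left to right
        if pivot_row >= rows:
            break
        p = pivot_row + np.argmax(np.abs(M[pivot_row:, col]))
        if abs(M[p, col]) < tol:                 # no usable pivot in this column
            continue
        M[[pivot_row, p]] = M[[p, pivot_row]]    # step 2: bring a non-zero entry to the top
        M[pivot_row] /= M[pivot_row, col]        # step 3: scale to make the leading entry 1
        for r in range(rows):                    # steps 4 and 6: zero out the rest of the column
            if r != pivot_row:
                M[r] -= M[r, col] * M[pivot_row]
        pivot_row += 1                           # step 5: move on to the remaining submatrix
    return M

# Augmented matrix of the example above:
# 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3.
M = np.array([[2, 1, -1, 8],
              [-3, -1, 2, -11],
              [-2, 1, 2, -3]], dtype=float)
print(rref(M))   # the last column reads off x = 2, y = 3, z = -1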