# History Of Perturbation Methods Accounting Essay


This report gives an insight into the mathematics applied in computer programs and finance. It is important to give an accurate approximation; however, the techniques learnt by studying Numerical Methods, although thorough, take a long time. Perturbation Methods do not take as long, as this report details.

The idea of Perturbation Methods is to find solutions with the smallest amount of error. They are important because, without them, some mathematical problems would be extremely hard to solve.

The research in this project gives an overview of the methods, covering several aspects of the scientific area. The applications extend to many sciences, such as chemistry and computer science, and the subject is a fundamental part of mathematics.

This report draws on many different journals and online articles to give an overview of what Perturbation Methods are and their applications.

## 1.0.0 Introduction

## 1.1.0 History of Perturbation Methods

Perturbation methods were first used in the 17th century; there are several great mathematicians who are known for their findings using Perturbation Methods.

## 1.1.1 Isaac Newton

Although very famous for his knowledge into physics and theology, Isaac Newton was a mathematician who built the foundations of Perturbation Methods with his ideas of gravity in celestial mechanics. His mathematical laws built a gateway for the methods to be found.

His ideas were built upon actual evidence and allowed further understanding of the solar system, and hence mathematical calculations for the orbits of entities in space (Johnson, 2012).

It later became clear, with Newton's laws of gravitation, that given sufficient information a Fourier analysis could be constructed. Of course, as humanity had been collecting information for many years, evidence was available, and therefore the analysis of celestial mechanics was born.

## 1.1.2 Brook Taylor

Brook Taylor is a fundamental mathematician, famous not for perturbation methods exactly, but for the mathematical expansions used in the methods to date.

He worked with Newtonian mechanics and played an important role in the development of calculus (Britannica, 2009). Showing promise by 1714 for his mathematics in numerical analysis, he had his first paper published, which involved a look at the oscillation of a body.

Although a somewhat controversial character, he entered into an argument with the powerful mathematician Johann Bernoulli, who called his paper plagiarised and nothing new (The MacTutor History of Mathematics Archive, 2006).

However, what is most important from the view of perturbation methods is his paper "Methodus Incrementorum Directa et Inversa" ("Direct and Indirect Methods of Incrementation"), which introduced finite differences into calculus in 1715. This allowed further important research into approximations, bringing about Taylor's Theorem.

## 1.1.3 Edmund Halley

Edmund Halley, known principally for finding Halley's Comet, used the foundations of the Newtonian laws of gravity to calculate when Halley's Comet would next show in the skies. Of course, some margin for error is included in his calculations; however, it was the first time in celestial mechanics that a perturbed problem was solved.

Edmund's method, although questioned at the time, forecast the comet to pass over in 1758. Although some error remained after other effects, such as the gravitational pull of several planets, were taken into account, the comet was seen on 25 December 1758, around one month from when Edmund had predicted. With the use of perturbation methods this was, for the time, a very good approximation, and the first correct use of the Newtonian laws of physics in celestial mechanics (Wikipedia, 2012).

## 1.1.4 Henri Poincaré

Henri Poincaré was a French mathematician who introduced the world to the idea of the asymptotic expansion, and who was also interested in celestial mechanics and astronomy. As a successful astronomer of his day he looked into the motion of three different entities and how they would interact. Through this he introduced the idea of an infinite series and the convergence of such series.

Realising that the expansion for the three-body problem was not integrable, and therefore could not be put into an algebraic form, he made one of the biggest discoveries in celestial mechanics since Isaac Newton (Wikipedia, 2013).

He thereby introduced the ideas at the base of chaos theory.

## 1.2.0 What are Perturbation Methods?

Sometimes, when given a mathematical problem, the solution is not clear and may take some time to work out, especially when varying amounts of offsetting cause the given equation to be perturbed by a variable ε; the problem then cannot be solved exactly. As said in Introduction to Regular Perturbation Theory by Eric Vanden-Eijnden, "Then, one may wonder how this solution is altered for non-zero but small ε" (Vanden-Eijnden, n.d.). In cases such as these, Perturbation Methods are needed to find an approximation.

Perturbation Methods are a form of analytical method that can aid mathematicians with a very big problem, which, in most cases, can cause mass frustration: the need for an exact solution. Given an equation which has been perturbed, or changed in one way or another, the solution becomes increasingly difficult to find, going from what may seem a simple quadratic equation all the way up to differential equations with large parameters. However, this is not to say Perturbation Methods do not have faults. They require a small parameter, as the approximations can diverge quite quickly (M. Madani, 2012); this fault will be shown in chapter two.

When most mathematicians look at the art of approximating solutions, they use numerical methods rather than analytical methods. This is not to say that the two are in competition; rather, each fills in where the other cannot. For example, Numerical Methods struggle with extreme values, where the margin for error is very wide; this is where Perturbation Methods can step in. As a form of asymptotic expansion with infinitely many terms, the rate of convergence is not based on lines of working with step sizes, as it is in the Runge-Kutta method, but on the number of terms in the approximation.

As mentioned by George Adomian, the method is applied to physical problems, but the linearization used to simplify the problem means it is realistically no longer a true representation of the physical problem; however, this is not to say that it cannot give a general approximation (Adomian, 1995).

## 1.3.0 Asymptotic Expansions

The use of asymptotic expansions in perturbation methods is a very important concept. The key idea of an asymptotic expansion is that the number of terms is infinite; however, this is not to say that the further you expand, the more precise the result becomes, as with too many terms added on the approximation can after a while diverge. As mentioned in Dynamical Systems with Applications using Maple by Stephen Lynch, "asymptotic expansions often do not converge, however, one-term and two-term approximations provide an analytical expression that is dependent on the parameter ε, and some initial conditions" (Lynch, 2010).

An expansion is clearly shown by the use of the Taylor series, where an infinite number of terms should generally sum to make a precise approximation.

## Notation

In asymptotic expansions there arises a certain notation which will be used in some areas of this project. This notation looks like a big O, and it stands for the limiting behaviour of a function. As the methods described in this project are asymptotic, the notation is used to group algorithms by how they respond, and by their growth rate.

## Singular and Regular Perturbation Techniques

Perturbation methods come in two types, singular and regular, because the system to be solved can either be simplified or kept as it is. In a simplified system, the parameters used in the expansion can be neglected, simplifying the problem. Reducing, and thus simplifying, the system reduces its order, and this is where the singular method is used. If the parameters are not ignored, then it is a regular system of equations, and so the regular method is used (Khalil, 1978).

## 2.0.0 Algebraic Equations

When solving algebraic equations, most use the method of simplifying and then solving; however, others find it easier to expand and then solve. It is the same for perturbation methods; however, as the solution in most cases cannot be shown as an integer, the approximate algebraic expansion x = x0 + εx1 + ε²x2 + … must be used.

The asymptotic expansion is used when the approximation has an infinite number of terms. Adding these terms together gives what is called an asymptotic series; however, given that the series may well go on to an infinite number of terms, it must be stopped after a certain few in order to meet a required margin of error.

## 2.1.0 Quadratic Equations

The most basic equation to solve with perturbation methods is the quadratic. With two solutions, they are generally very easy to find, and hence a good approximation can be found.

The quadratic formula usually comes in the form of,

The general steps to solve such an equation are simply to simplify it by factorisation, completing the square or using the quadratic formula. Below are some examples of using the above methods and expanding upon them to find an asymptotic approximation; this is done using the "Expand then Solve" or "Solve then Expand" methods.

## 2.1.1 Expand then Solve

## Example 1

Given an equation of a quadratic being

Equation 1.1

It is simple to solve by simplifying the equation, which would make it,

Equation 1.1

This would give the solutions to be,

Equation 1.1

However, when perturbation methods are involved, expanding and then solving the equation is needed to complete the calculations, for example when ε≠0, which is when the equation is "perturbed".

The above equation could become

Equation 1.1

Solving this would mean finding values for x; as ε gets bigger, so does the error, and so more terms are needed in the asymptotic expansion.

Therefore we substitute in an expansion for x which works for all values of ε:

Equation 1.2

The above equation gives a good idea of suitable notation for approximating a value of x. By substituting it into the above equation (Equation 1.1), it becomes.

Equation 1.1

By expanding each section of this equation and then substituting the parts back in, the equation becomes solvable for each coefficient in equation 1.2.

Equation 1.1

Equation 1.1

By then substituting them back into equation 1.1 gives,

Equation 1.1

It is then the task of collecting like terms

Equation 1.1

The next step is to let ε→0, thereby finding the first term of each expansion, x0.

Therefore the first part of equation 1.3 becomes


Equation 1.1

Equation 1.1

Let x0=-4

Equation 1.1

Equation 1.1

Let x0=-4 and x1=-7/3

Equation 1.1

Equation 1.1

By substituting this into the expanding approximation for x the approximation turns out to be

Equation 1.1

Let x0=-1

Equation 1.1

Solving this for x1,

Equation 1.1

Let x0=-1, x1= 1/3

Equation 1.1

Substituting the values of x0 and x1 makes the following equation

Equation 1.1

Equation 1.1

Which (by substituting the values found above into equation 1.1) makes the following which approximates the perturbed equation.

Equation 1.1
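The two-term approximations above can be checked numerically. Below is a minimal Python sketch, assuming the perturbed equation is x² + (5+2ε)x + (4+ε) = 0 (the coefficients used in section 2.1.2) and the first-order roots x ≈ -4 - 7ε/3 and x ≈ -1 + ε/3 derived above:

```python
import math

def exact_roots(eps):
    """Exact roots of x^2 + (5+2*eps)*x + (4+eps) = 0 via the quadratic formula."""
    b, c = 5 + 2 * eps, 4 + eps
    disc = math.sqrt(b * b - 4 * c)
    return (-b - disc) / 2, (-b + disc) / 2  # near -4 and near -1

def first_order(eps):
    """Two-term asymptotic approximations derived in the text."""
    return -4 - 7 * eps / 3, -1 + eps / 3

eps = 0.01
(r1, r2), (a1, a2) = exact_roots(eps), first_order(eps)
print(abs(r1 - a1), abs(r2 - a2))  # both errors are of order eps**2
```

For small ε the two-term expansions agree with the exact roots to within a term of order ε², exactly the behaviour a first-order truncation should show.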

## 2.1.2 Solve then Expand

The asymptotic equation can also be calculated by solving the equation first, using the quadratic formula. To show continuity it is best to use the same example. Below is the equation for solving a general quadratic, known as the quadratic formula.

Equation 1.1

With the equation being

Equation 1.1

It is clear to see a=1, b=5+2ε and c=4+ε

By substituting these in, the formula becomes

Equation 1.1

Or after solving

Equation 1.1

By expanding the above equations with the binomial expansion they become

Equation 1.1

Equation 1.1

Which to find the two values of x becomes

Equation 1.1

Equation 1.1

Equation 1.1

As shown in the example above.

## 2.2.0 Convergence of Asymptotic Equations -Algebraic Quadratic Equations

## 2.2.1 Convergence or Divergence and Error-Quadratic

One could use an asymptotic expansion to find an equation for N terms, where N tends to infinity. However, with the endless number of terms to find, it is best to define a certain margin of error and work to that (Hinch, 1995).

Unlike humans, computers do not naturally work to a certain margin of error; of course, this comes down to the programmer's skill, but nevertheless, the larger the equation and the larger the value of x, in some circumstances it is much quicker to do the calculations by hand to find an asymptotic expansion (Nayfeh, 1993).

A perturbed equation is

Equation 1.1

In order to see how, as ε gets bigger, the solution diverges from the unperturbed solutions x=-4 and x=-1, a graph of the error must be shown.

Figure 1: x^2+(5+2*e)*x+4+e^10, x(0)=-4

This clearly shows that as ε gets bigger, so does the error. The graph clearly shows that the solution does increasingly diverge from the value of -4.

When compared with the equation in the example.

Figure 2: x^2+(5+2*e)x+4+e, x(0)=-4

This graph shows that the error curve is nearly linear, with ε having a power of one as in the example. However, although Figure 1 takes a while to gain a bigger gradient, its y-axis is in units of 10^-4, whereas Figure 2 is linear but over a higher margin of error.

Overall, these graphs show that, for this particular problem, the larger the value of ε, the greater the margin of error.

The rate of convergence can be shown by applying more and more terms in the asymptotic expansion for x=-1.

Equation 1.1

For example when ε=0.001, the solution for the above example will be

Equation 1.1

So by using the first part of the expansion which is minus one, the error is 0.003.

By adding on the next term of the asymptotic expansion, which is ε/3

The substitution of

Equation 1.1

And so for a solution for four decimal places, the asymptotic expansion must only go up to two terms for there to be no margin of error.

However, this may change if we change the value of ε to 0.515, for example. The solution for the above equation becomes

And the error is when substituting just the first value of the asymptotic approximation (-1) is 0.1240.

Then by adding on the next term of the asymptotic expansion

Equation 1.1

This gives the error of -0.0477.

Then add on the next stage of the asymptotic expansion as so far it is only converging to one decimal place.

Equation 1.1

This still gives to one decimal place of accuracy. Add on the next stage of the asymptotic expansion.

Equation 1.1

This is now diverging, which can happen with an asymptotic approximation when the value of ε gets too big; the approximation for x then becomes invalid.
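This behaviour can be checked numerically. The sketch below assumes the perturbed equation x² + (5+2ε)x + (4+ε) = 0 and the expansion x = -1 + ε/3 + … for the root near -1; the ε² coefficient, -7/27, is derived here by matching orders and is an assumption not stated in the text:

```python
import math

# Coefficients of x = -1 + eps/3 - 7*eps**2/27 + ... for the root near -1 of
# x^2 + (5+2*eps)*x + (4+eps) = 0; the eps**2 term is our own derivation.
COEFFS = [-1.0, 1.0 / 3.0, -7.0 / 27.0]

def exact_root(eps):
    """Exact root near -1 from the quadratic formula."""
    b, c = 5 + 2 * eps, 4 + eps
    return (-b + math.sqrt(b * b - 4 * c)) / 2

def truncated(eps, n_terms):
    """Partial sum of the asymptotic expansion with n_terms terms."""
    return sum(COEFFS[k] * eps**k for k in range(n_terms))

for eps in (0.01, 0.515):
    errs = [abs(truncated(eps, n) - exact_root(eps)) for n in (1, 2, 3)]
    print(eps, errs)
```

For ε=0.01 each extra term shrinks the error by orders of magnitude; for ε=0.515 the error stays at the 10⁻² level, matching the text's observation that accuracy stalls around one decimal place for large ε.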

As with normal convergence the differential of the equation is a major factor in finding if something is convergent or not.

For example in the previous example.

Equation 1.1

By differentiating this we get.

Equation 1.1

Equation 1.1

This is less than one, and shows that the expansion is unstable when x=-4, but stable up to a point when x=-1. This is an important factor indeed when looking into the convergence or divergence of an asymptotic expansion.

The differential of the equation, given the magnitude of the gradient, will give an insight into how many iterations (i.e. how many terms) it takes to get a necessary degree of accuracy (Hinch, 1995).

## 2.2.2 Comparison of Asymptotic Expansion to actual solution

As seen in the above graph, with the value of ε getting bigger the expansion gradually diverges from the actual solution. An important point about solutions of perturbed equations is that the approximation can diverge for bigger values of ε. As the parameter is dimensionless, the method relies on the parameter being small in order to gain a more accurate solution (Simmonds & Mann Jr., 1997).

## 2.3.0 Equations with algebraic solutions

Of course, in some equations some solutions can still remain unknown; however, this is not to say that the expansion cannot be constructed. For this section a simple quadratic equation can be made with one known solution and one unknown.

## 2.3.1 Expand then Solve

Of course, in some circumstances the roots may be non-numerical. For this we should look into the following example, where this factor is explored.

## Example: Algebraic Solution

At other times we could have an equation which is already simplified, but one of the two solutions is unknown and could therefore be a fraction; the below equation shows the case when ε=0.

Equation 1.1

This gives the solutions of

Equation 1.1

However, what are the possible solutions when ε≠0? The equation then becomes

Equation 1.1

And following like the above example we substitute in the below equation for approximating x.

Equation 1.1

Then substitute the expansion for x into the equation again and repeat the above method. To show the full method without making the working longer than it has to be, this will again go up to the second power; this is called working to second order.

Equation 1.1

Equation 1.1

The next step is to collect like terms, so take the equation on the right over to the left and then collect terms, again calculating up to second order.

Equation 1.1

We then let ε→0

The first section of the equation becomes

Equation 1.1

Equation 1.1

This gives the solutions of

Equation 1.1

Given x0=4

The second part of the equation, after dividing by ε, becomes

Equation 1.1

Equation 1.1

Equation 1.1

Equation 1.1

Equation 1.1

Equation 1.1

Equation 1.1

The above equation is one in which, as τ gets closer to two, the approximation breaks down, as the solution will tend to infinity. In certain circumstances it does not have to equal two, as some corrections of the asymptotic expansion may require it to take another value to become a zeroth-order term. This means that the zeroth-order term will not be small, despite the usual assumptions used in analytical methods, but quite large. To see which terms break down, and are therefore not useful in our approximation, we look to the equations which involve the usual statement -2 (Nayfeh, 1993).

## 2.3.2 Solve then Expand

For some equations it is a lot quicker to solve first and then expand. For example, the equation

can be solved by perturbation methods too.

By substituting the separate coefficients which are.

Equation 1.1

Equation 1.1

Equation 1.1

Equation 1.1

By composing a binomial expansion this gives.

Equation 1.1

## 2.4.0 Cubic Equations

The next example to look at is the cubic equation. Even though there is an extra power, the method is generally the same.

## Example: Cubic Equation

Given the equation

Equation 1.1

This can easily be solved by factorising to make

Equation 1.1

Showing that

Equation 1.1

However when the equation is perturbed it could become

Equation 1.1

Again we use the approximation for x which is

Equation 1.1

Therefore we substitute it into the main equation. It comes out as.

Equation 1.1

And then expand.

Equation 1.1

We then have to collect coefficients; we go up to the first coefficient of ε, as the extra working for higher powers of ε isn't needed.

Equation 1.1

Therefore there are two equations to come out of this

Equation 1.1

Equation 1.1

Next find the equation of x1

The equation for this part is

Let x0=1.

Equation 1.1

Equation 1.1

Making the approximate solution for x to be

Equation 1.1

Let x0=-1

Equation 1.1

Equation 1.1

Solving this gives,

Equation 1.1

The asymptotic expansion for the root of x=3/2 is therefore.

Equation 1.1

Let x0=2

Equation 1.1

Equation 1.1

Equation 1.1

Equation 1.1

This gives two different approximations, dependent on which root the user is looking at, although the bigger the value of epsilon, the more the value still diverges. Because the boundary is small to begin with, the approximation is extremely accurate.
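The first-order step generalises to any polynomial with a simple root: for f(x) + εg(x) = 0 with f(x₀) = 0, expanding to first order gives x ≈ x₀ - εg(x₀)/f'(x₀). Below is a minimal sketch, where the cubic (x-1)(x+1)(x-2) and the perturbation g(x) = x are illustrative assumptions rather than the text's exact equation:

```python
def f(x):   # assumed unperturbed cubic with roots 1, -1 and 2
    return x**3 - 2 * x**2 - x + 2

def fp(x):  # its derivative
    return 3 * x**2 - 4 * x - 1

def g(x):   # assumed perturbation term
    return x

def first_order_root(x0, eps):
    """x ~ x0 - eps*g(x0)/f'(x0), valid near a simple root x0 of f."""
    return x0 - eps * g(x0) / fp(x0)

def bisect_root(lo, hi, eps, tol=1e-12):
    """Exact perturbed root of f(x) + eps*g(x) = 0 by bisection."""
    h = lambda x: f(x) + eps * g(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

eps = 0.01
print(first_order_root(2.0, eps), bisect_root(1.9, 2.1, eps))
```

The two values agree to within a term of order ε², as expected for a one-term correction.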

## 2.5.0 Convergence or Divergence and Error-Cubic Equation

## 2.5.1 Convergence or Divergence and Error-Cubic

Looking into the convergence of a cubic equation, with the above equation for x=-1

As can be seen above, the error fluctuates when adding 0.1 to ε at each step. The error is not linear like the quadratic's. This could be because it reaches other solutions to the equation at that point.

However this can be compared when raising the power of the final term of the cubic again to the power of ten.

As can be seen, raising the power makes the error curve a lot smoother, but this is over a very high error range.

## 2.6.0 Higher Order Equations

For equations with powers higher than the quadratic and the cubic, the method is generally the same. However, being of higher order also means there are more solutions, which are not all known.

To begin with the equation would be standard as

Equation 1.1

(Nayfeh, 1993, p. 43)

From the above equation, letting ε→0 gives something generally recognisable, which makes the equation simpler to devise, simplify and solve via perturbation methods.

Equation 1.1

Returning to the perturbation method, we use the predicted value of x with respect to ε, as shown in previous sections.

Substituting this into the above equation, before letting ε→0, the equation becomes.

Equation 1.1

Expanding and equating coefficients gives

Equation 1.1


## 2.7.0 Transcendental Equations

Transcendental equations are equations which cannot be solved algebraically, as their solutions cannot be expressed in closed form.

## Example: Transcendental Equation

For example the equation

Equation 1.1

It is one which transcends algebra, and therefore cannot be expressed in algebraic terms.

A starting point is that x→∞ gives

Equation 1.1

Once perturbed the above equation could become

Equation 1.1

This simplifies down to

Equation 1.1

By using the standard application of the asymptotic expansion of

Equation 1.1

Remembering Taylor's theorem, the expansion for the exponential is

Equation 1.1

(efunda, 2012)

Substituting this into the main equation, and using just the first couple of terms to demonstrate the perturbation method without going into too-complicated expansions, it becomes

Equation 1.1

This can give way for the equation to be manipulated

Equation 1.1

We interpret from the above equation that as long as ε≪1, x must be greater than one; however, this also means that the root must be quite big to satisfy the values when ε≪1.

This is the reason to find the asymptotic expansion; however, as there is an exponential function in the formula, we must use the natural logarithm.

Equation 1.1

Equation 1.1

As x≫1 when ε≪1, we must also assume that ln(x) is quite small, which must mean that x is near the root of 2ln(1/ε).

So the approximation comes in the form

Equation 1.1

This does however also give rise to an expansion of an iterative case.

Equation 1.1

(Malham, n.d.)

The idea with transcendental equations in this instance is to see where the equation equals itself. Solving the above equation when , gives

Equation 1.1

Equation 1.1

This implies that the value of x depends on the amount by which the graph has changed; as above, the equation changed by 0.01.
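The iterative idea can be sketched on a standard model equation, x e^(-x) = ε, which is an assumption chosen for illustration and may differ from the text's exact equation. Taking logarithms gives x = ln(1/ε) + ln(x), which suggests a fixed-point iteration starting from the leading-order guess x ≈ ln(1/ε):

```python
import math

def iterate_root(eps, n_iter=50):
    """Fixed-point iteration x <- ln(1/eps) + ln(x) for the assumed
    model equation x * exp(-x) = eps, valid for small eps (large x)."""
    L = math.log(1 / eps)
    x = L  # leading-order guess: x ~ ln(1/eps)
    for _ in range(n_iter):
        x = L + math.log(x)
    return x

eps = 1e-4
x = iterate_root(eps)
print(x, x * math.exp(-x))  # the residual x*exp(-x) should be very close to eps
```

Each iteration refines the logarithmic correction; because the contraction factor is roughly 1/x, convergence is very fast once x is large.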

## Summary

As shown above, the use of Perturbation Methods on simple, straightforward equations follows a very general model. Cases like those above clearly show the flexibility that a perturbation method gives for certain problems.

Although the solution may not be clean-cut, and there may always be a certain margin of error, it is important to understand that for perturbed calculations the method is generally quick and very accurate.

## 3.0.0 Integration of Perturbed Equations

Some differential equations cannot be solved in terms of a simple solution; as integration is a form of calculus, a generic solution for the area of an ever-changing graph is a tricky idea to come up against. However, this does not mean that it is impossible to solve.

The main use for the asymptotic expansion is to see if the series is convergent, therefore it is most useful to use the likes of the Taylor series to expand for generic functions and then find if it converges for all values, if it does then for whatever value which the equation is perturbed by it can be found and solved.

## 3.1.0 Normal Integration

## Example: Normal Integration

Given the following equation we can manipulate to solve and find an asymptotic expansion

Equation 1.1

As the Taylor series has an expansion for e in the form of

Equation 1.1

Of course, an important thing to remember when dealing with series is whether they converge or not. It is especially important in this area of mathematics to know that the series converges for all, or most, values of m, so that the solution is precise.

Therefore we must use the ratio test to compare and investigate.

Equation 1.1

And hence the series does indeed converge for all values of m.

However by substituting the expansion into the main equation and integrating term by term we come out with the equation.

Equation 1.1

By expanding upon this we get the asymptotic expansion:

Equation 1.1

From the above expansion the series seems to be converging on some value; however, it is hard to find what value, since m has not been determined. Nevertheless, from a distance the values do converge.
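Term-by-term integration of a convergent Taylor series can be illustrated with a stand-in integrand, since the text's own integrand is not reproduced here: integrating exp(mx) over [0, 1] term by term gives the series with terms m^n/(n+1)!, which can be checked against the closed form (e^m - 1)/m.

```python
import math

def series_integral(m, n_terms=30):
    """Integrate the Taylor series of exp(m*x) over [0, 1] term by term:
    sum of m**n / (n+1)! over n (assumed illustrative integrand)."""
    return sum(m**n / math.factorial(n + 1) for n in range(n_terms))

def closed_form(m):
    """Exact value of the integral of exp(m*x) over [0, 1]."""
    return (math.exp(m) - 1) / m

m = 2.0
print(series_integral(m), closed_form(m))  # the two values agree closely
```

Because the exponential series converges for all m (as the ratio test above shows), the truncated term-by-term integral converges to the exact value as more terms are added.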

Methods which help with integration in perturbation methods are the Taylor series, as seen before, and Laplace transforms, which use the exponential function in their integration. The mathematics involved is quite complicated.

For this we look to an example to run through.

Look into the expansion of the following simple function by expanding it into a Taylor series.

Equation 1.1

This becomes the expansion.

Equation 1.1

Using the ratio test.

Equation 1.1

Therefore the system converges to 0, which would be expected from a function which is reflected on the line of y=0.

## 3.2.0 Integration by Parts

Of course there is the matter of equations which are better solved by parts. In certain circumstances the integral can make its own asymptotic expansion.

## Example: Integration by Parts

The formula for integration by parts is.

Equation 1.1

Given the equation,

Equation 1.1

Which in this case the formula would be

Equation 1.1

We can split the equation into differing parts in order to use in the above integration by parts formula.

Equation 1.1

By differentiating u and integrating dv/ds, we can input the variables into the formula.

Equation 1.1

Hence

Equation 1.1

By simplifying makes the following

Equation 1.1

Following the same pattern it becomes

Equation 1.1

Equation 1.1

Equation 1.1

Again repeating the procedure

Equation 1.1

Substituting the new values gives the new asymptotic expansion; to recap, the new terms of the expansion are.

Equation 1.1

Due to the continuing negative multiplication from each expansion, the asymptotic expansion evaluates to,

Equation 1.1

By continuing this expansion it becomes clear that the expansion has a general form of

Equation 1.1

To find if this converges or not the ratio test must be used

Therefore the series diverges to minus infinity as shown in the above ratio test.
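The divergence described above can be made concrete under the assumption that the integral in question is of exponential-integral type, the integral of e^(-t)/t from x to infinity, whose repeated integration by parts is the classic source of an alternating series with terms (-1)^n n!/x^(n+1) (times a common e^(-x) factor); this may differ from the text's exact integral. A minimal sketch of the term magnitudes:

```python
import math

def term_magnitude(n, x):
    """|n-th term| of the assumed asymptotic series from repeated
    integration by parts: n! / x**(n+1) (common exp(-x) factor omitted)."""
    return math.factorial(n) / x ** (n + 1)

x = 5.0
mags = [term_magnitude(n, x) for n in range(10)]
print(mags)  # terms shrink until n is near x, then grow without bound
```

The ratio of successive term magnitudes is (n+1)/x, so for any fixed x the terms eventually grow and the series diverges, exactly as the ratio test above indicates; in practice such a series is truncated near its smallest term.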

## 3.4.0 Laplace Transforms

One interesting part of integration is the matter of Laplace Transforms.

## Watson's Lemma

Watson's Lemma is an important concept in the theory of asymptotic integrals. It states that, by restricting the integral of some function, an asymptotic expansion can be built as a series by integrating each term in a binomial expansion of f(t).

These types of equations generally come in the form of

Equation 1.1

(Nayfeh, 1993)
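The building block behind Watson's lemma is the standard moment integral: the integral of e^(-λt) t^n over [0, ∞) equals n!/λ^(n+1). The following sketch checks this numerically; the values λ=3 and n=2 are illustrative assumptions.

```python
import math

def moment_numeric(lam, n, upper=40.0, steps=100000):
    """Trapezoidal estimate of the integral of exp(-lam*t) * t**n
    over [0, upper]; the tail beyond `upper` is negligibly small."""
    h = upper / steps
    f = lambda t: math.exp(-lam * t) * t**n
    total = 0.5 * (f(0.0) + f(upper))
    for i in range(1, steps):
        total += f(i * h)
    return total * h

lam, n = 3.0, 2
print(moment_numeric(lam, n), math.factorial(n) / lam ** (n + 1))
```

Each term of the binomial expansion of f(t), multiplied by e^(-λt) and integrated, is one such moment, which is how the term-by-term series below arises.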

Looking at a different example, we can examine the use of a Laplace transform to get an asymptotic expansion.

Equation 1.1

In order to construct an asymptotic series, we increase the upper limit to ∞. The binomial expansion of the denominator is

Equation 1.1

Of course being a series we must analyse the convergence of such a series, this is done by using the ratio test.

Equation 1.1

This shows that the series will only expand for values where 2|t|<1. Of course, to integrate the whole function from 0 to 20 is a very difficult problem to tackle, since integrating the above expansion term by term will only work up to the value t=1/2.

However, this isn't to say that it can't be solved; we can split the boundaries of integration at γ.

Equation 1.1

However as can be seen for the value of 20 in the second part of the above integral, it will produce some very small numbers which (depending on the needed degree of accuracy) will be useless in the calculations and will take up unnecessary calculation time.

Therefore we can use the first part of the integral. Using the expansion found earlier for the denominator we can make the new integral

Equation 1.1

By taking out the integer terms and thereby simplifying the integral this can transform to

Equation 1.1

Of course, multiplying two symbolic factors in integration can be quite tricky, and therefore a substitution is needed.

This comes in the form of

Equation 1.1

Therefore the equation becomes

Equation 1.1

By integrating the above equation we use the integration by parts formula

Equation 1.1

Doing this term by term it becomes

Equation 1.1

Equation 1.1

Therefore this becomes

Equation 1.1

Going further again by integrating using the integration by parts formula,

Equation 1.1

With the different parts being,

Equation 1.1

Equation 1.1

This makes the equation

Equation 1.1

Again using the integration by parts formula,

Equation 1.1

Equation 1.1

Equation 1.1

Equation 1.1

Now taking them all into a bigger formula we see that it becomes

Equation 1.1

By expanding the brackets one by one we see that

Equation 1.1

Which by substituting into the main expansion we can get the equation

Equation 1.1

Yet this can be simplified further to get a better view on the integral

Equation 1.1

It is clear to see that if a does go to zero very quickly then the only term remaining will be the n factorial. This would make the final equation now

Equation 1.1

Substitute this into the series found previously and find if it converges or not

Equation 1.1

From this it can be seen that it is a series and therefore as usual find if it converges or not thus a ratio test should be used.

Equation 1.1

As the above series diverges, it is not appropriate to use an equals sign, since an asymptotic series need not converge. Therefore the "~" sign should be used.

Equation 1.1

This is the above sum. This goes to minus infinity and therefore diverges.

## 3.5.0 Stationary Phase

Thanks is given to "Introduction to Perturbation Techniques" by Ali Hasan Nayfeh and "Perturbation Methods" by E.J. Hinch for such in-depth information in this section.

Calculating a stationary phase makes use of the complex plane and the Fourier integral. The stationary phase approximation is commonly used for oscillatory integrals: it relies on the fact that the rapidly changing oscillations largely cancel, so the dominant contribution comes from points where the phase is stationary. The method also uses complex numbers expressed in polar form, rendering an equation for the Fourier integral.

Equation 1.1

The stationary phase is commonly found using the method of steepest descent; such working can be quite time consuming, but accuracy improves when both methods are used together.
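Since the Fourier integral itself is not reproduced above, a sketch on a model integral (an assumed example, not the document's) illustrates the idea: for I(λ) = ∫ from −1 to 1 of e^{iλt²} dt, the oscillations cancel almost everywhere, and the leading contribution comes from the single stationary point t = 0, which gives √(π/λ)·e^{iπ/4}:

```python
import cmath
import math

def oscillatory_integral(lam, n_steps=200000):
    # Trapezoidal estimate of I(lam) = integral_{-1}^{1} e^{i lam t^2} dt
    h = 2.0 / n_steps
    total = 0j
    for i in range(n_steps + 1):
        t = -1.0 + i * h
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * cmath.exp(1j * lam * t * t)
    return total * h

lam = 200.0
numeric = oscillatory_integral(lam)
# Leading stationary-phase term: the single stationary point t = 0 contributes
# sqrt(pi / lam) * e^{i pi / 4}; the endpoints only contribute at O(1/lam).
approx = math.sqrt(math.pi / lam) * cmath.exp(1j * math.pi / 4)
rel_err = abs(numeric - approx) / abs(approx)
print(rel_err)
```

The relative error here is O(1/λ) and comes from the non-stationary endpoints, which is consistent with the approximation keeping only the stationary point.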

## 3.6.0 Steepest Descent

The methods of steepest descent and stationary phase use a Laplace-type integral in which a contour in the complex plane is deformed to pass near a stationary point. The saddle point is sought in the complex plane, in place of the Laplace method shown above, which worked on the real line. The ideal situation which commonly arises is the one in which Watson's lemma applies, which was previously derived using the Laplace integration method.

Generally this comes in the form of the formula

Equation 1.1

(Nayfeh, 1993)

The method of steepest descent deforms a contour into one in the complex plane along which the phase is constant; this path is the path of steepest descent, since the integrand descends most rapidly along it.

Steepest descent depends on the real part of the exponent in the complex plane: where the real part is positive and at its largest, the integrand peaks, and from this we can begin to see where the descent begins and ends. As stated in Perturbation Methods, "A contour to be integrated in the complex plane must require however for Re(f)<0 in order for an infinite integral to converge" (Hinch, 1995).

It may seem that on the complex plane the imaginary values are ignored and unnecessary; however, this is not true. When z (the complex number) is very large, the imaginary part oscillates very quickly; by choosing values for the imaginary part that keep it constant along the contour, we obtain a good estimate for the steepest descent. If the imaginary part varies too much, the resulting oscillations can cancel out any chance of a good integration over the area, and the sought-after stationary points can be missed.

E.J. Hinch notes in Perturbation Methods that in practice the method of steepest descent is often not carried out in full, since the integration will converge on a local saddle point and give a good stationary phase. The major consideration in such a study, finding the highest saddle point, can be quite difficult, yet it is genuinely the one through which the contour must pass as it continues through various fixed points. Some higher saddles lie in places through which the contour does not pass, due to the limits of the integral, and so are not found. (Hinch, 1995)

## 4.0.0 ODEs

Ordinary differential equations are used in a variety of scientific areas. An ordinary differential equation is an equation in one independent variable containing various derivatives of an unknown function of that variable.

Perturbation methods have been applied to solve both linear and non-linear models of significant scientific interest (Saravi, et al., 2013).

## 4.1.0 Non Oscillating ODE

In considering such a problem, it is important to incorporate the idea of an asymptotic series alongside a differential equation; for a perturbed equation, the solution changes at each order, with each term correcting the error of the previous approximation.

The background for this example came from the website: http://maji.utsi.edu/courses/05_540_perturbations_1/enotes_p3_RegularODEs.pdf. Credit is given to these notes for the helpful examples.

For example by looking into the equation (Tzitzouris, 2003).

Equation 1.1

We can begin to look into the solutions to this, up to and including the first order asymptotic expansion.

Equation 1.1

Therefore via substitution the above ordinary differential equation becomes.

Equation 1.1

By separating the different orders, two equations come forward

Equation 1.1

Equation 1.1

By solving these equations we find that.

Equation 1.1

And this allows us to solve the second equation which is

Equation 1.1

Putting this all together, we can formulate an asymptotic expansion up to first order of the form.

Equation 1.1
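The worked equations from the notes are not reproduced in this copy, so as a stand-in the same procedure, substitute y ≈ y0 + εy1, collect powers of ε, and solve order by order, can be sketched on a representative problem of my own choosing (an assumption, not the notes' example): the Bernoulli equation y' + y = εy² with y(0) = 1, whose exact solution is available for comparison:

```python
import math

eps = 0.1

def exact(t):
    # Exact solution of the Bernoulli problem y' + y = eps*y^2, y(0) = 1
    return 1.0 / ((1.0 - eps) * math.exp(t) + eps)

def first_order(t):
    # Regular expansion y ~ y0 + eps*y1, obtained by collecting orders:
    #   O(1):   y0' + y0 = 0,    y0(0) = 1  ->  y0 = e^{-t}
    #   O(eps): y1' + y1 = y0^2, y1(0) = 0  ->  y1 = e^{-t} - e^{-2t}
    return math.exp(-t) + eps * (math.exp(-t) - math.exp(-2.0 * t))

worst = max(abs(exact(k / 10.0) - first_order(k / 10.0)) for k in range(101))
print(worst)  # worst error on [0, 10] is O(eps^2)
```

The worst error over the whole interval is O(ε²), as expected of a first-order regular expansion on a problem with no secular behaviour.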

## 4.2.0 Oscillating Ordinary Differential equation

## 4.2.1 Straightforward expansion

Some differential equations produce solutions which automatically oscillate. Using the example of the Duffing equation

Equation 1.1

With initial conditions

By using the following asymptotic expansion for x,

Equation 1.1

Substituting this into the above differential equation gives.

Equation 1.1

Equation 1.1

Expanding and collecting terms makes the equations

Equation 1.1

Solving each part of the equations with the initial conditions.

Equation 1.1

This makes for the first expansion

Equation 1.1

Introducing the value ε = 0.01, the solution for the perturbed equation is

Equation 1.1

Subtracting x0 from x, we can show a graph of the error for increasing values of t.

Equation 1.1

Figure showing the error of the Duffing equation with a one-term expansion

Although the error is not that big, it can be reduced by solving the second of the collected equations; adding the two terms together and subtracting from the initial value of x gives

Equation 1.1

The graph below shows how the error has changed, with the oscillation in blue being the one-term expansion and the oscillation in red including the second expansion term.

Figure showing the error of the Duffing equation with a two-term expansion

Unfortunately, as can be seen above, for large t the error of the first expansion becomes extremely large. The "t sin(t)" term in the above equation is a secular term which grows without bound, as is clearly shown; such terms occur commonly in straightforward perturbation expansions. The reason is that as t gets bigger, this term of the approximation grows with it and causes major error.
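This growth can be checked numerically. Assuming the standard Duffing form x'' + x + εx³ = 0 with x(0) = 1, x'(0) = 0 (the document's exact equation and initial conditions are not reproduced in this copy), the two-term expansion is x ≈ cos t + ε[(cos 3t − cos t)/32 − (3/8)t sin t], and a fourth-order Runge-Kutta integration serves as the reference:

```python
import math

eps = 0.01

def rhs(x, v):
    # Duffing oscillator x'' + x + eps*x^3 = 0 as a first-order system
    return v, -x - eps * x ** 3

def max_errors(t_end=50.0, h=0.001):
    x, v, t = 1.0, 0.0, 0.0      # assumed initial conditions x(0)=1, x'(0)=0
    err_one = err_two = 0.0
    for _ in range(int(round(t_end / h))):
        # one classical RK4 step
        k1x, k1v = rhs(x, v)
        k2x, k2v = rhs(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
        k3x, k3v = rhs(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
        k4x, k4v = rhs(x + h * k3x, v + h * k3v)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += h
        x0 = math.cos(t)
        # second term, including the secular -(3/8) t sin t part
        x1 = (math.cos(3.0 * t) - math.cos(t)) / 32.0 - 0.375 * t * math.sin(t)
        err_one = max(err_one, abs(x - x0))
        err_two = max(err_two, abs(x - (x0 + eps * x1)))
    return err_one, err_two

err_one, err_two = max_errors()
print(err_one, err_two)
```

Even with the secular term included both errors eventually grow, but the one-term error is already large by t = 50 because the O(ε) frequency shift accumulates as εt.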

It is now more appropriate to use the Poincare Lindstedt method, a method which remains accurate for all time (i.e. as t gets bigger and bigger); it eliminates the secular term, making sure that the approximation stays bounded and its error does not increase as the variable t gets bigger.

## 4.2.2 Lindstedt Poincare Expansion

This method, as described in "Introduction to Perturbation Methods" by M.H. Holmes, is due to Lindstedt, who found that the error was dominated by the secular term in the second term of the expansion: "To remedy the situation, he expanded the independent variable with the intention of making the approximation uniformly valid by removing the secular term" (Holmes, 1995).

To begin this method we introduce a new variable in place of t and transform the equation; this new variable is a strained coordinate. The reason for introducing such a method is that any expansion which does not include a strained coordinate will always fail as t grows larger.

Again solving for the above equation

Equation 1.1

With the strained variables defined as

Equation 1.1

(Wikipedia, 2012)

Then the equation, in which each order of derivative carries the corresponding power of ω, becomes.

Equation 1.1

This equation will become,

Equation 1.1

Equation 1.1

By collecting terms of ε0 and ε1 we get

Equation 1.1

Equation 1.1

Solving the above with the initial conditions given before

Equation 1.1

This gives O(ε) to be

Equation 1.1

This gives the solution

Equation 1.1

By selecting ω to be −3/8 we get the answer. The reason for this is that we do not want τ to make the term secular (i.e. growing without bound as t grows from 0).

Equation 1.1

Equation 1.1

This makes the asymptotic expansion of x to be

Equation 1.1

With the expansion of the variable τ therefore calculated to be,

Equation 1.1

This gives the graph

As can be seen from the graph, the error is a lot smaller than for the straightforward perturbation expansion. The error even seems to decrease as the expansion goes on, which is surprising, given that the error of an expansion would normally grow as time goes on.
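The improvement can be checked numerically. Under the same assumed form x'' + x + εx³ = 0 with x(0) = 1, x'(0) = 0 (with this sign convention the strained frequency is ω = 1 + (3/8)ε; the −3/8 quoted above corresponds to the opposite sign on the cubic term), the first-order Lindstedt Poincare approximation is x ≈ cos ωt + (ε/32)(cos 3ωt − cos ωt):

```python
import math

eps = 0.01
omega = 1.0 + 0.375 * eps    # strained frequency: omega = 1 + (3/8)*eps

def rhs(x, v):
    # Duffing oscillator x'' + x + eps*x^3 = 0 as a first-order system
    return v, -x - eps * x ** 3

def max_errors(t_end=150.0, h=0.002):
    x, v, t = 1.0, 0.0, 0.0      # assumed initial conditions x(0)=1, x'(0)=0
    err_naive = err_lp = 0.0
    for _ in range(int(round(t_end / h))):
        # one classical RK4 step
        k1x, k1v = rhs(x, v)
        k2x, k2v = rhs(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
        k3x, k3v = rhs(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
        k4x, k4v = rhs(x + h * k3x, v + h * k3v)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += h
        tau = omega * t          # strained coordinate
        lp = math.cos(tau) + eps * (math.cos(3.0 * tau) - math.cos(tau)) / 32.0
        err_naive = max(err_naive, abs(x - math.cos(t)))
        err_lp = max(err_lp, abs(x - lp))
    return err_naive, err_lp

err_naive, err_lp = max_errors()
print(err_naive, err_lp)
```

The naive error grows like εt, while the Lindstedt Poincare error stays of order ε² over the whole interval; this uniform validity is exactly what the strained coordinate buys.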

The Lindstedt Poincare expansion does have its weaknesses: it cannot be applied to all ordinary differential equations, and if the solution of the ordinary differential equation does not behave in a periodic manner it will fail (Lynch, 2010).

## 4.3.0 Different types of Oscillations

As previously shown with the Duffing equation, there are many types of approximation for perturbed oscillating equations. Although the method is generally the same, it is important to show these too, together with their error.

## 4.3.1 Linear Oscillators

The linear oscillator considered here is one which includes damping. We can apply both the straightforward and Lindstedt Poincare approximations to the linear oscillator equation.

The equation put simply is

Equation 1.1

Of course, this mathematics assumes ideal conditions, hence such a simple equation.

By setting the differential coefficients to be

Equation 1.1

Equation 1.1

Introducing the perturbation parameter ε again.

Equation 1.1

Equation 1.1

Substituting the expansion

Equation 1.1

Equation 1.1

Expanding the above makes

Equation 1.1

Collecting terms makes

Equation 1.1

Solving the above like the Duffing equation.

Equation 1.1

Solving the above gives

Equation 1.1

For O(ε)

Equation 1.1

Equation 1.1

Therefore the expansion is

Equation 1.1

Although this method is the same as for the Duffing equation, it is important to understand that this approximation originates from a linear oscillator. Being a straightforward expansion, however, its error will grow very quickly.
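Since the oscillator equation itself is not reproduced above, take the standard damped form x'' + 2εx' + x = 0 with x(0) = 1, x'(0) = 0 as an assumed stand-in. Its exact solution is elementary, and the straightforward expansion x ≈ cos t + ε(sin t − t cos t) carries a secular t cos t term that imitates the true decay e^{−εt} only while εt ≪ 1:

```python
import math

eps = 0.1

def exact(t):
    # Exact solution of x'' + 2*eps*x' + x = 0, x(0) = 1, x'(0) = 0
    wd = math.sqrt(1.0 - eps ** 2)   # damped frequency
    return math.exp(-eps * t) * (math.cos(wd * t) + (eps / wd) * math.sin(wd * t))

def two_term(t):
    # Straightforward expansion x ~ cos t + eps*(sin t - t cos t); the secular
    # t*cos t term only mimics the decay e^{-eps t} while eps*t << 1
    return math.cos(t) + eps * (math.sin(t) - t * math.cos(t))

err_short = max(abs(exact(k / 10.0) - two_term(k / 10.0)) for k in range(51))   # t <= 5
err_long = max(abs(exact(k / 10.0) - two_term(k / 10.0)) for k in range(501))   # t <= 50
print(err_short, err_long)
```

For the first few periods the expansion is serviceable, but by εt ≈ 5 the secular term has driven the effective amplitude factor (1 − εt) negative and the error dwarfs the solution itself; this is the failure that motivates strained coordinates or multiple-scales methods.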

It is now applicable to use the Lindstedt-Poincare method to find an approximation, using the same expansions as before in the Duffing section

Equation 1.1

Equation 1.1

Equation 1.1

The equation which is to be solved is

Equation 1.1

Equation 1.1

Equation 1.1

Collecting up to the ε terms gives the following set of equations.

Equation 1.1

Equation 1.1

Solving the equation above gives

Equation 1.1

Unfortunately, finding the solution is a tricky matter. By making the equation balance and separating the x0 and x1 parts, we get the equation.

Equation 1.1

Expanding upon this the equations becomes

Equation 1.1

To solve such a problem we would have to make

Equation 1.1

However this would leave the equation to be

Equation 1.1

This would leave the equation to have x1 to be

Equation 1.1

This cannot be, as the expansion must have a second term to continue. The continuation of this example leaves some serious questions about the Poincare expansion: it appears to fail here, admitting only a trivial solution, and so it does have its limits.

By solving directly using Matlab the solution becomes

Equation 1.1

This would leave the value of ω ever changing, which is not desirable.

## Conclusion

As seen above, the solutions of ordinary differential equations can be found using perturbation methods; however, the approach struggles with some tougher problems and leads to some very heavy, complex mathematics. That said, a straightforward expansion for small values of ε gives a good approximation, and the Poincare Lindstedt method worked very well on the Duffing equation, although when put to the test on a linear oscillator it failed.

## 5.0.0 Applications of Perturbation Methods

Since perturbation methods are a form of approximation, they stretch into many areas of mathematics.

## 5.1.0 The Sciences

Perturbation methods were born in the sciences and that is where their main applications lie, ranging from physics to astronomy.

For example, at the moment in the Large Hadron Collider, perturbation methods are being used to calculate the interactions between two particles. As stated on The Guardian website: "you can ignore the contributions which involve more than say four particles - they are just a small perturbation on the main result, because they are multiplied by 0.1 x 0.1 x 0.1 x 0.1 x 0.1 = 0.00001. They don't change the result much. This is "perturbation theory". It is accurate if the coupling is small, that is if the force is weak." (Butterworth, 2011). It is clear that perturbation methods are central to the search for the Higgs boson and extend into the plane of pure mathematical approximation in physics.

Perturbation methods a