
Numerical Differential Equation Analysis Package

Disclaimer: This dissertation has been submitted by a student. This is not an example of the work written by our professional dissertation writers.

Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.

The Numerical Differential Equation Analysis package combines functionality for analyzing differential equations using Butcher trees, Gaussian quadrature, and Newton-Cotes quadrature.

Butcher

Runge-Kutta methods are useful for numerically solving certain types of ordinary differential equations. Deriving high-order Runge-Kutta methods is no easy task, however. There are several reasons for this. The first difficulty is in finding the so-called order conditions. These are nonlinear equations in the coefficients for the method that must be satisfied to make the error in the method of order O(h^n) for some integer n, where h is the step size. The second difficulty is in solving these equations. Besides being nonlinear, there is generally no unique solution, and many heuristics and simplifying assumptions are usually made. Finally, there is the problem of combinatorial explosion. For a twelfth-order method there are 7813 order conditions!

This package performs the first task: finding the order conditions that must be satisfied. The result is expressed in terms of unknown coefficients aij, bj, and ci. The s-stage Runge-Kutta method to advance from x to x+h is then

y(x+h) = y(x) + h Σ_{j=1}^{s} bj f(Yj(x+h))

where

Yi(x+h) = y(x) + h Σ_{j=1}^{s} ai,j f(Yj(x+h))

Sums of the elements in the rows of the matrix [aij] occur repeatedly in the conditions imposed on aij and bj. In recognition of this and as a notational convenience it is usual to introduce the coefficients ci and the definition

ci = Σ_{j=1}^{s} ai,j

This definition is referred to as the row-sum condition and is the first in a sequence of row-simplifying conditions.

If aij=0 for all i≤j the method is explicit; that is, each of the Yi (x+h) is defined in terms of previously computed values. If the matrix [aij] is not strictly lower triangular, the method is implicit and requires the solution of a (generally nonlinear) system of equations for each timestep. A diagonally implicit method has aij=0 for all i<j.
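The explicit/diagonally implicit/implicit classification above is a simple structural test on the coefficient matrix. As an illustration (a sketch, not part of the package), the rule can be expressed directly in code; the matrices below are the classical RK4 matrix and two hypothetical 2-stage examples:

```python
def classify_rk(a):
    """Classify a Butcher coefficient matrix a[i][j] (0-indexed):
    explicit if every entry on or above the diagonal is zero,
    diagonally implicit if every entry above the diagonal is zero,
    and implicit otherwise."""
    s = len(a)
    if all(a[i][j] == 0 for i in range(s) for j in range(i, s)):
        return "explicit"
    if all(a[i][j] == 0 for i in range(s) for j in range(i + 1, s)):
        return "diagonally implicit"
    return "implicit"

# Classical RK4: strictly lower triangular, hence explicit.
rk4 = [[0, 0, 0, 0],
       [1/2, 0, 0, 0],
       [0, 1/2, 0, 0],
       [0, 0, 1, 0]]
print(classify_rk(rk4))                          # explicit
print(classify_rk([[1/4, 0], [1/2, 1/4]]))       # diagonally implicit
print(classify_rk([[1/4, -1/4], [1/4, 5/12]]))   # implicit
```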

There are several ways to express the order conditions. If the number of stages s is specified as a positive integer, the order conditions are expressed in terms of sums of explicit terms. If the number of stages is specified as a symbol, the order conditions will involve symbolic sums. If the number of stages is not specified at all, the order conditions will be expressed in stage-independent tensor notation. In addition to the matrix a and the vectors b and c, this notation involves the vector e, which is composed of all ones. This notation has two distinct advantages: it is independent of the number of stages s and it is independent of the particular Runge-Kutta method.

For further details of the theory see the references.

ai,j

the coefficient of f(Yj(x)) in the formula for Yi(x) of the method

bj

the coefficient of f(Yj(x)) in the formula for Y(x) of the method

ci

a notational convenience for the row sum ai1 + ai2 + ... + ais

e

a notational convenience for the vector (1, 1, 1, ...)

Notation used by functions for Butcher.

RungeKuttaOrderConditions[p,s]

give a list of the order conditions that any s-stage Runge-Kutta method of order p must satisfy

ButcherPrincipalError[p,s]

give a list of the order p+1 terms appearing in the Taylor series expansion of the error for an order-p, s-stage Runge-Kutta method

RungeKuttaOrderConditions[p], ButcherPrincipalError[p]

give the result in stage-independent tensor notation

Functions associated with the order conditions of Runge-Kutta methods.

ButcherRowSum

specify whether the row-sum conditions for the ci should be explicitly included in the list of order conditions

ButcherSimplify

specify whether to apply Butcher's row and column simplifying assumptions

Some options for RungeKuttaOrderConditions.

This gives the number of order conditions for each order up through order 10. Notice the combinatorial explosion.

In[2]:=

 

Out[2]=
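The counts produced above can be checked independently: each order-n condition corresponds to a rooted tree with n vertices, so the number of new conditions at order n is the number of rooted trees on n vertices. A sketch using the standard rooted-tree recurrence (not part of the package):

```python
def rooted_tree_counts(pmax):
    """Number of rooted trees with n vertices for n = 1..pmax, computed with
    the classical recurrence a(n+1) = (1/n) * sum_{k=1..n} s(k) * a(n-k+1),
    where s(k) = sum over divisors d of k of d * a(d)."""
    a = [0, 1]  # a[n] = number of rooted trees on n vertices
    for n in range(1, pmax):
        s = sum(sum(d * a[d] for d in range(1, k + 1) if k % d == 0) * a[n - k + 1]
                for k in range(1, n + 1))
        a.append(s // n)
    return a[1:]

counts = rooted_tree_counts(10)
print(counts)                        # [1, 1, 2, 4, 9, 20, 48, 115, 286, 719]
print(sum(rooted_tree_counts(12)))   # 7813 conditions through order 12
```

The total through order 12 reproduces the 7813 order conditions quoted earlier.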

 

This gives the order conditions that must be satisfied by any first-order, 3-stage Runge-Kutta method, explicitly including the row-sum conditions.

In[3]:=

 

Out[3]=

 

These are the order conditions that must be satisfied by any second-order, 3-stage Runge-Kutta method. Here the row-sum conditions are not included.

In[4]:=

 

Out[4]=

 

It should be noted that the sums involved on the left-hand sides of the order conditions will be left in symbolic form and not expanded if the number of stages is left as a symbolic argument. This will greatly simplify the results for high-order, many-stage methods. An even more compact form results if you do not specify the number of stages at all and the answer is given in tensor form.

These are the order conditions that must be satisfied by any second-order, s-stage method.

In[5]:=

 

Out[5]=

 

Replacing s by 3 gives the same result as RungeKuttaOrderConditions.

In[6]:=

 

Out[6]=

 

These are the order conditions that must be satisfied by any second-order method. This uses tensor notation. The vector e is a vector of ones whose length is the number of stages.

In[7]:=

 

Out[7]=

 

The tensor notation can likewise be expanded to give the conditions in full.

In[8]:=

 

Out[8]=

 

These are the principal error coefficients for any third-order method.

In[9]:=

 

Out[9]=

 

This is a bound on the local error of any third-order method in the limit as h approaches 0, normalized to eliminate the effects of the ODE.

In[10]:=

 

Out[10]=

 

Here are the order conditions that must be satisfied by any fourth-order, 1-stage Runge-Kutta method. Note that there is no possible way for these order conditions to be satisfied; there need to be more stages (the second argument must be larger) for there to be sufficiently many unknowns to satisfy all of the conditions.

In[11]:=

 

Out[11]=

 

 

RungeKuttaMethod

specify the type of Runge-Kutta method for which order conditions are being sought

Explicit

a setting for the option RungeKuttaMethod specifying that the order conditions are to be for an explicit Runge-Kutta method

DiagonallyImplicit

a setting for the option RungeKuttaMethod specifying that the order conditions are to be for a diagonally implicit Runge-Kutta method

Implicit

a setting for the option RungeKuttaMethod specifying that the order conditions are to be for an implicit Runge-Kutta method

$RungeKuttaMethod

a global variable whose value can be set to Explicit, DiagonallyImplicit, or Implicit

Controlling the type of Runge-Kutta method in RungeKuttaOrderConditions and related functions.

RungeKuttaOrderConditions and certain related functions have the option RungeKuttaMethod with default setting $RungeKuttaMethod. Normally you will want to determine the Runge-Kutta method being considered by setting $RungeKuttaMethod to one of Implicit, DiagonallyImplicit, or Explicit, but you can specify an option setting or even change the default for an individual function.

These are the order conditions that must be satisfied by any second-order, 3-stage diagonally implicit Runge-Kutta method.

In[12]:=

 

Out[12]=

 

An alternative (but less efficient) way to get a diagonally implicit method is to force a to be lower triangular by replacing upper-triangular elements with 0.

In[13]:=

 

Out[13]=

 

These are the order conditions that must be satisfied by any third-order, 2-stage explicit Runge-Kutta method. The contradiction in the order conditions indicates that no such method is possible, a result which holds for any explicit Runge-Kutta method when the number of stages is less than the order.

In[14]:=

 

Out[14]=

 

 

ButcherColumnConditions[p,s]

give the column simplifying conditions up to and including order p for s stages

ButcherRowConditions[p,s]

give the row simplifying conditions up to and including order p for s stages

ButcherQuadratureConditions[p,s]

give the quadrature conditions up to and including order p for s stages

ButcherColumnConditions[p], ButcherRowConditions[p], etc.

give the result in stage-independent tensor notation

More functions associated with the order conditions of Runge-Kutta methods.

Butcher showed that the number and complexity of the order conditions can be reduced considerably at high orders by the adoption of so-called simplifying assumptions. For example, this reduction can be accomplished by adopting sufficient row and column simplifying assumptions and quadrature-type order conditions. The option ButcherSimplify in RungeKuttaOrderConditions can be used to determine these automatically.

These are the column simplifying conditions up to order 4.

In[15]:=

 

Out[15]=

 

These are the row simplifying conditions up to order 4.

In[16]:=

 

Out[16]=

 

These are the quadrature conditions up to order 4.

In[17]:=

 

Out[17]=

 

Trees are fundamental objects in Butcher's formalism. They yield both the derivative in a power series expansion of a Runge-Kutta method and the related order constraint on the coefficients. This package provides a number of functions related to Butcher trees.

f

the elementary symbol used in the representation of Butcher trees

ButcherTrees[p]

give a list, partitioned by order, of the trees for any Runge-Kutta method of order p

ButcherTreeSimplify[p,Eta,Xi]

give the set of trees through order p that are not reduced by Butcher's simplifying assumptions, assuming that the quadrature conditions through order p, the row simplifying conditions through order Eta, and the column simplifying conditions through order Xi all hold. The result is grouped by order, starting with the first nonvanishing trees

ButcherTreeCount[p]

give a list of the number of trees through order p

ButcherTreeQ[tree]

give True if the tree or list of trees tree is valid functional syntax, and False otherwise

Constructing and enumerating Butcher trees.

This gives the trees that are needed for any third-order method. The trees are represented in a functional form in terms of the elementary symbol f.

In[18]:=

 

Out[18]=

 

This tests the validity of the syntax of two trees. Butcher trees must be constructed using multiplication, exponentiation or application of the function f.

In[19]:=

 

Out[19]=

 

This evaluates the number of trees at each order through order 10. The result is equivalent to Out[2] but the calculation is much more efficient since it does not actually involve constructing order conditions or trees.

In[20]:=

 

Out[20]=

 

The previous result can be used to calculate the total number of trees required at each order through order 10.

In[21]:=

 

Out[21]=

 

The number of constraints for a method using row and column simplifying assumptions depends upon the number of stages. ButcherTreeSimplify gives the Butcher trees that are not reduced assuming that these assumptions hold.

This gives the additional trees that are necessary for a fourth-order method assuming that the quadrature conditions through order 4 and the row and column simplifying assumptions of order 1 hold. The result is a single tree of order 4 (which corresponds to a single fourth-order condition).

In[22]:=

 

Out[22]=

 

It is often useful to be able to visualize a tree or forest of trees graphically. For example, depicting trees yields insight, which can in turn be used to aid in the construction of Runge-Kutta methods.

ButcherPlot[tree]

give a plot of the tree tree

ButcherPlot[{tree1,tree2,...}]

give an array of plots of the trees in the forest {tree1, tree2,...}

Drawing Butcher trees.

ButcherPlotColumns

specify the number of columns in the GraphicsGrid plot of a list of trees

ButcherPlotLabel

specify a list of plot labels to be used to label the nodes of the plot

ButcherPlotNodeSize

specify a scaling factor for the nodes of the trees in the plot

ButcherPlotRootSize

specify a scaling factor for the highlighting of the root of each tree in the plot; a zero value does not highlight roots

Options to ButcherPlot.

This plots and labels the trees through order 4.

In[23]:=

 

Out[23]=

 

In addition to generating and drawing Butcher trees, many functions are provided for measuring and manipulating them. For a complete description of the importance of these functions, see Butcher.

ButcherHeight[tree]

give the height of the tree tree

ButcherWidth[tree]

give the width of the tree tree

ButcherOrder[tree]

give the order, or number of vertices, of the tree tree

ButcherAlpha[tree]

give the number of ways of labeling the vertices of the tree tree with a totally ordered set of labels such that if (m, n) is an edge, then m<n

ButcherBeta[tree]

give the number of ways of labeling the tree tree with ButcherOrder[tree]-1 distinct labels such that the root is not labeled, but every other vertex is labeled

ButcherBeta[n,tree]

give the number of ways of labeling n of the vertices of the tree with n distinct labels such that every leaf is labeled and the root is not labeled

ButcherBetaBar[tree]

give the number of ways of labeling the tree tree with ButcherOrder[tree] distinct labels such that every node, including the root, is labeled

ButcherBetaBar[n,tree]

give the number of ways of labeling n of the vertices of the tree with n distinct labels such that every leaf is labeled

ButcherGamma[tree]

give the density of the tree tree; the reciprocal of the density is the right-hand side of the order condition imposed by tree

ButcherPhi[tree,s]

give the weight of the tree tree; the weight Φ(tree) is the left-hand side of the order condition imposed by tree

ButcherPhi[tree]

give Φ(tree) using tensor notation

ButcherSigma[tree]

give the order of the symmetry group of isomorphisms of the tree tree with itself

Other functions associated with Butcher trees.

This gives the order of the tree f[f[f[f] f^2]].

In[24]:=

 

Out[24]=

 

This gives the density of the tree f[f[f[f] f^2]].

In[25]:=

 

Out[25]=

 

This gives the elementary weight function imposed by f[f[f[f] f^2]] for an s-stage method.

In[26]:=

 

Out[26]=

 

The subscript notation is a formatting device and the subscripts are really just the indexed variable NumericalDifferentialEquationAnalysis`Private`$i.

In[27]:=

 

Out[27]//FullForm=

   
   

It is also possible to obtain solutions to the order conditions using Solve and related functions. Many issues related to the construction of Runge-Kutta methods using this package can be found in Sofroniou. The article also contains details concerning algorithms used in Butcher.m and discusses applications.

Gaussian Quadrature

As one of its methods, the Mathematica function NIntegrate uses a fairly sophisticated Gauss-Kronrod-based algorithm. The Gaussian quadrature functionality provided in Numerical Differential Equation Analysis allows you to easily study some of the theory behind ordinary Gaussian quadrature which is a little less sophisticated.

The basic idea behind Gaussian quadrature is to approximate the value of an integral as a linear combination of values of the integrand evaluated at specific points:

∫_a^b f(x) dx ≈ Σ_{i=1}^{n} wi f(xi)

Since there are 2n free parameters to be chosen (both the abscissas xi and the weights wi) and since both integration and the sum are linear operations, you can expect to be able to make the formula correct for all polynomials of degree less than about 2n. In addition to knowing what the optimal abscissas and weights are, it is often desirable to know how large the error in the approximation will be. This package allows you to answer both of these questions.

GaussianQuadratureWeights[n,a,b]

give a list of the pairs (xi, wi) to machine precision for quadrature on the interval a to b

GaussianQuadratureError[n,f,a,b]

give the error to machine precision

GaussianQuadratureWeights[n,a,b,prec]

give a list of the pairs (xi, wi) to precision prec

GaussianQuadratureError[n,f,a,b,prec]

give the error to precision prec

Finding formulas for Gaussian quadrature.

This gives the abscissas and weights for the five-point Gaussian quadrature formula on the interval (-3, 7).

In[2]:=

 

Out[2]=
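The abscissas and weights computed by GaussianQuadratureWeights can be checked independently with NumPy's Gauss-Legendre routine: the standard nodes on [-1, 1] are mapped linearly onto (a, b). This is a sketch of the underlying construction, not the package's own implementation:

```python
import numpy as np

def gaussian_quadrature_weights(n, a, b):
    """n-point Gauss-Legendre abscissas and weights on [a, b], obtained by
    linearly mapping the standard nodes and weights from [-1, 1]."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    return x, 0.5 * (b - a) * w

x, w = gaussian_quadrature_weights(5, -3, 7)
# The weights sum to the interval length, and the rule is exact for
# polynomials of degree <= 2n - 1 = 9:
print(w.sum())                            # ~10.0
print(np.dot(w, x**9))                    # integral of x^9 over (-3, 7)
print((7**10 - (-3)**10) / 10)            # exact value for comparison
```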

 

Here is the error in that formula. Unfortunately it involves the tenth derivative of f at an unknown point so you don't really know what the error itself is.

In[3]:=

 

Out[3]=

 

You can see that the error decreases rapidly with the length of the interval.

In[4]:=

 

Out[4]=

 

Newton-Cotes

As one of its methods, the Mathematica function NIntegrate uses a fairly sophisticated Gauss-Kronrod based algorithm. Other types of quadrature formulas exist, each with their own advantages. For example, Gaussian quadrature uses values of the integrand at oddly spaced abscissas. If you want to integrate a function presented in tabular form at equally spaced abscissas, it won't work very well. An alternative is to use Newton-Cotes quadrature.

The basic idea behind Newton-Cotes quadrature is to approximate the value of an integral as a linear combination of values of the integrand evaluated at equally spaced points:

∫_a^b f(x) dx ≈ Σ_{i=1}^{n} wi f(xi), with the xi equally spaced

In addition, there is the question of whether or not to include the end points in the sum. If they are included, the quadrature formula is referred to as a closed formula. If not, it is an open formula. If the formula is open there is some ambiguity as to where the first abscissa is to be placed. The open formulas given in this package have the first abscissa one half step from the lower end point.

Since there are n free parameters to be chosen (the weights) and since both integration and the sum are linear operations, you can expect to be able to make the formula correct for all polynomials of degree less than about n. In addition to knowing what the weights are, it is often desirable to know how large the error in the approximation will be. This package allows you to answer both of these questions.

NewtonCotesWeights[n,a,b]

give a list of the n pairs (xi, wi) for quadrature on the interval a to b

NewtonCotesError[n,f,a,b]

give the error in the formula

Finding formulas for Newton-Cotes quadrature.

QuadratureType (default value: Closed)

the type of quadrature, Open or Closed

Option for NewtonCotesWeights and NewtonCotesError.

Here are the abscissas and weights for the five-point closed Newton-Cotes quadrature formula on the interval (-3, 7).

In[2]:=

 

Out[2]=
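Closed Newton-Cotes weights can be reproduced by requiring exactness on the monomials 1, x, ..., x^(n-1) and solving the resulting linear system. The sketch below (not the package's own algorithm) works over exact rationals; for five points on (-3, 7) it recovers the weights of Boole's rule, (1/9)(7, 32, 12, 32, 7):

```python
from fractions import Fraction

def newton_cotes_weights(n, a, b):
    """Weights for the n-point closed Newton-Cotes rule on [a, b], found by
    solving the moment equations sum_i w_i x_i^k = integral of x^k over [a, b]
    for k = 0..n-1 via Gauss-Jordan elimination over exact rationals."""
    a, b = Fraction(a), Fraction(b)
    xs = [a + (b - a) * i / (n - 1) for i in range(n)]  # ends included
    rows = [[xs[i] ** k for i in range(n)] + [(b ** (k + 1) - a ** (k + 1)) / (k + 1)]
            for k in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if rows[r][c] != 0)  # pivot row
        rows[c], rows[p] = rows[p], rows[c]
        rows[c] = [v / rows[c][c] for v in rows[c]]
        for r in range(n):
            if r != c and rows[r][c] != 0:
                rows[r] = [u - rows[r][c] * v for u, v in zip(rows[r], rows[c])]
    return xs, [rows[i][-1] for i in range(n)]

xs, ws = newton_cotes_weights(5, -3, 7)
print(ws)  # weights (1/9)(7, 32, 12, 32, 7), summing to the interval length 10
```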

 

Here is the error in that formula. Unfortunately it involves the sixth derivative of f at an unknown point so you don't really know what the error itself is.

In[3]:=

 

Out[3]=

 

You can see that the error decreases rapidly with the length of the interval.

In[4]:=

 

Out[4]=

 

This gives the abscissas and weights for the five-point open Newton-Cotes quadrature formula on the interval (-3, 7).

In[5]:=

 

Out[5]=

 

Here is the error in that formula.

In[6]:=

 

Out[6]=

 

Runge-Kutta Methods

From Wikipedia, The Free Encyclopedia


In numerical analysis, the Runge-Kutta methods (German pronunciation: [ˌʀʊŋəˈkʊta]) are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations. These techniques were developed around 1900 by the German mathematicians C. Runge and M.W. Kutta.

See the article on numerical ordinary differential equations for more background and other methods. See also List of Runge-Kutta methods.


The Common Fourth-Order Runge-Kutta Method

One member of the family of Runge-Kutta methods is so commonly used that it is often referred to as "RK4", "classical Runge-Kutta method" or simply as "the Runge-Kutta method".

Let an initial value problem be specified as follows:

y′ = f(t, y),  y(t0) = y0.

Then, the RK4 method for this problem is given by the following equations:

yn+1 = yn + (h/6)(k1 + 2k2 + 2k3 + k4)
tn+1 = tn + h

where yn+1 is the RK4 approximation of y(tn+1), and

k1 = f(tn, yn)
k2 = f(tn + h/2, yn + (h/2) k1)
k3 = f(tn + h/2, yn + (h/2) k2)
k4 = f(tn + h, yn + h k3)

Thus, the next value (yn+1) is determined by the present value (yn) plus the product of the size of the interval (h) and an estimated slope. The slope is a weighted average of slopes:

k1 is the slope at the beginning of the interval;

k2 is the slope at the midpoint of the interval, using slope k1 to determine the value of y at the point tn + h / 2 using Euler's method;

k3 is again the slope at the midpoint, but now using the slope k2 to determine the y-value;

k4 is the slope at the end of the interval, with its y-value determined using k3.

In averaging the four slopes, greater weight is given to the slopes at the midpoint:

slope = (k1 + 2k2 + 2k3 + k4)/6

The RK4 method is a fourth-order method, meaning that the error per step is on the order of h^5, while the total accumulated error has order h^4.

Note that the above formulae are valid for both scalar- and vector-valued functions (i.e., y can be a vector and f an operator). For example, one can integrate the Schrödinger equation using the Hamiltonian operator as the function f.
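The four-slope recipe above translates directly into code. The sketch below applies it to the test problem y′ = y, y(0) = 1 (whose exact solution at t = 1 is e), and halving the step size shrinks the error by roughly 2^4 = 16, confirming fourth-order behavior:

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical RK4 method: four slope evaluations,
    weighted 1/6, 1/3, 1/3, 1/6."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(h):
    """Integrate y' = y, y(0) = 1 up to t = 1 with fixed step h."""
    t, y = 0.0, 1.0
    while t < 1.0 - 1e-12:
        y = rk4_step(lambda t, y: y, t, y, h)
        t += h
    return y

err_h = abs(solve(0.1) - math.e)
err_h2 = abs(solve(0.05) - math.e)
print(err_h / err_h2)  # roughly 16 for a fourth-order method
```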

Explicit Runge-Kutta Methods

The family of explicit Runge-Kutta methods is a generalization of the RK4 method mentioned above. It is given by

yn+1 = yn + h Σ_{i=1}^{s} bi ki

where

k1 = f(tn, yn)
k2 = f(tn + c2 h, yn + h a21 k1)
k3 = f(tn + c3 h, yn + h (a31 k1 + a32 k2))
...
ks = f(tn + cs h, yn + h (as1 k1 + as2 k2 + ... + as,s−1 ks−1))

(Note: the above equations have different but equivalent definitions in different texts.)

To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients aij (for 1 ≤ j < i ≤ s), bi (for i = 1, 2, ..., s) and ci (for i = 2, 3, ..., s). These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher):

 

0    |
c2   | a21
c3   | a31  a32
...  | ...  ...
cs   | as1  as2  ...  as,s−1
-----+----------------------------
     | b1   b2   ...  bs−1   bs

The Runge-Kutta method is consistent if

Σ_{j=1}^{i−1} aij = ci  for i = 2, ..., s.

There are also accompanying requirements if we require the method to have a certain order p, meaning that the truncation error is O(hp+1). These can be derived from the definition of the truncation error itself. For example, a 2-stage method has order 2 if b1 + b2 = 1, b2c2 = 1/2, and b2a21 = 1/2.
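The three order-2 conditions quoted above can be checked mechanically against known tableaus; the sketch below tests them (using c2 = a21, as the row-sum condition requires) for the midpoint and Heun tableaus appearing in this article, plus a 2-stage padding of forward Euler that fails:

```python
from fractions import Fraction as F

def satisfies_order2(b1, b2, c2, a21):
    """Check the quoted conditions for a 2-stage method:
    b1 + b2 = 1, b2*c2 = 1/2, b2*a21 = 1/2."""
    return b1 + b2 == 1 and b2 * c2 == F(1, 2) and b2 * a21 == F(1, 2)

# Midpoint method: b = (0, 1), c2 = a21 = 1/2
print(satisfies_order2(F(0), F(1), F(1, 2), F(1, 2)))   # True
# Heun's method: b = (1/2, 1/2), c2 = a21 = 1
print(satisfies_order2(F(1, 2), F(1, 2), F(1), F(1)))   # True
# Forward Euler padded to two stages: b = (1, 0) -- only first order
print(satisfies_order2(F(1), F(0), F(0), F(0)))         # False
```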

Examples

The RK4 method falls in this framework. Its tableau is:

0    |
1/2  | 1/2
1/2  | 0    1/2
1    | 0    0    1
-----+---------------------
     | 1/6  1/3  1/3  1/6

However, the simplest Runge-Kutta method is the (forward) Euler method, given by the formula yn + 1 = yn + hf(tn,yn). This is the only consistent explicit Runge-Kutta method with one stage. The corresponding tableau is:

 

0  |
---+---
   | 1

An example of a second-order method with two stages is provided by the midpoint method:

yn+1 = yn + h f(tn + h/2, yn + (h/2) f(tn, yn))

The corresponding tableau is:

 

0    |
1/2  | 1/2
-----+----------
     | 0    1

Note that this 'midpoint' method is not the optimal RK2 method. An alternative is provided by Heun's method, where the 1/2's in the tableau above are replaced by 1's and the b's row is [1/2, 1/2]. If one wants to minimize the truncation error, the method below should be used (Atkinson p.423). Other important methods are Fehlberg, Cash-Karp and Dormand-Prince. Also, read the article on Adaptive Stepsize.

Usage

The following is an example usage of a two-stage explicit Runge-Kutta method:

 

0    |
2/3  | 2/3
-----+------------
     | 1/4  3/4

to solve the initial-value problem

with step size h=0.025.

The tableau above yields the equivalent corresponding equations defining the method:

k1 = yn
k2 = yn + (2/3) h f(tn, k1)
yn+1 = yn + h ((1/4) f(tn, k1) + (3/4) f(tn + (2/3) h, k2))

t0 = 1, y0 = 1

t1 = 1.025:
k1 = y0 = 1
f(t0, k1) = 2.557407725
k2 = y0 + (2/3) h f(t0, k1) = 1.042623462
y1 = y0 + h ((1/4) f(t0, k1) + (3/4) f(t0 + (2/3) h, k2)) = 1.066869388

t2 = 1.05:
k1 = y1 = 1.066869388
f(t1, k1) = 2.813524695
k2 = y1 + (2/3) h f(t1, k1) = 1.113761467
y2 = y1 + h ((1/4) f(t1, k1) + (3/4) f(t1 + (2/3) h, k2)) = 1.141332181

t3 = 1.075:
k1 = y2 = 1.141332181
f(t2, k1) = 3.183536647
k2 = y2 + (2/3) h f(t2, k1) = 1.194391125
y3 = y2 + h ((1/4) f(t2, k1) + (3/4) f(t2 + (2/3) h, k2)) = 1.227417567

t4 = 1.1:
k1 = y3 = 1.227417567
f(t3, k1) = 3.796866512
k2 = y3 + (2/3) h f(t3, k1) = 1.290698676
y4 = y3 + h ((1/4) f(t3, k1) + (3/4) f(t3 + (2/3) h, k2)) = 1.335079087

The numerical solutions are the values y1, ..., y4 computed at each step. Note that each f(ti, k1) is calculated once and reused rather than recomputed in the yi.
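The steps above can be reproduced in a few lines. The initial-value problem itself is elided in this copy; the tabulated values are consistent with the assumption y′ = tan(y) + 1, y(1) = 1 (reverse-engineered from f(t0, k1) = 2.557407725 = tan(1) + 1), which the sketch below adopts:

```python
import math

# Assumed IVP (not stated in the surviving text): y' = tan(y) + 1, y(1) = 1.
f = lambda t, y: math.tan(y) + 1

h = 0.025
t, y = 1.0, 1.0
for _ in range(4):
    k1 = y
    k2 = y + 2 / 3 * h * f(t, k1)
    y = y + h * (1 / 4 * f(t, k1) + 3 / 4 * f(t + 2 / 3 * h, k2))
    t += h

print(y)  # ~1.335079087, matching y4 in the table above
```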

Adaptive Runge-Kutta Methods

The adaptive methods are designed to produce an estimate of the local truncation error of a single Runge-Kutta step. This is done by having two methods in the tableau, one with order p and one with order p − 1.

The lower-order step is given by

y*n+1 = yn + h Σ_{i=1}^{s} b*i ki

where the ki are the same as for the higher-order method. Then the error is

en+1 = yn+1 − y*n+1 = h Σ_{i=1}^{s} (bi − b*i) ki

which is O(hp). The Butcher Tableau for this kind of method is extended to give the values of b*i:

 

0    |
c2   | a21
c3   | a31  a32
...  | ...  ...
cs   | as1  as2  ...  as,s−1
-----+------------------------------
     | b1   b2   ...  bs−1   bs
     | b*1  b*2  ...  b*s−1  b*s

The Runge-Kutta-Fehlberg method has two methods of orders 5 and 4. Its extended Butcher Tableau is:

 

0      |
1/4    | 1/4
3/8    | 3/32       9/32
12/13  | 1932/2197  −7200/2197  7296/2197
1      | 439/216    −8          3680/513    −845/4104
1/2    | −8/27      2           −3544/2565  1859/4104    −11/40
-------+--------------------------------------------------------------
       | 16/135     0           6656/12825  28561/56430  −9/50   2/55
       | 25/216     0           1408/2565   2197/4104    −1/5    0

However, the simplest adaptive Runge-Kutta method involves combining the Heun method, which is order 2, with the Euler method, which is order 1. Its extended Butcher Tableau is:

 

0  |
1  | 1
---+-----------
   | 1/2  1/2
   | 1    0

The error estimate is used to control the stepsize.
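The Heun-Euler pair is small enough to sketch in full: both methods share the two slope evaluations, and the error estimate is just the difference between the order-2 and order-1 results. On y′ = y with step h, that difference works out to about h²/2 times y:

```python
def heun_euler_step(f, t, y, h):
    """One step of the embedded Heun (order 2) / Euler (order 1) pair.
    Returns the higher-order value and the local error estimate."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_heun = y + h / 2 * (k1 + k2)   # b  = (1/2, 1/2)
    y_euler = y + h * k1             # b* = (1, 0)
    return y_heun, abs(y_heun - y_euler)

# On y' = y, the estimate is (h/2)|k2 - k1| = (h^2/2) y + O(h^3):
y, est = heun_euler_step(lambda t, y: y, 0.0, 1.0, 0.1)
print(y, est)  # estimate close to 0.1**2 / 2 = 0.005
```

In an adaptive integrator, this estimate would be compared against a tolerance to accept or reject the step and to choose the next h.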

Other adaptive Runge-Kutta methods are the Bogacki-Shampine method (orders 3 and 2), the Cash-Karp method and the Dormand-Prince method (both with orders 5 and 4).

Implicit Runge-Kutta Methods

The implicit methods are more general than the explicit ones. The distinction shows up in the Butcher Tableau: for an implicit method, the coefficient matrix [aij] is not necessarily lower triangular, so every stage may depend on every other stage:

ki = f(tn + ci h, yn + h Σ_{j=1}^{s} aij kj)

The approximate solution to the initial value problem reflects the greater number of coefficients:

yn+1 = yn + h Σ_{i=1}^{s} bi ki

Due to the fullness of the matrix [aij], the evaluation of each ki is now considerably involved and dependent on the specific function f(t,y). Despite the difficulties, implicit methods are of great importance due to their high (possibly unconditional) stability, which is especially important in the solution of partial differential equations. The simplest example of an implicit Runge-Kutta method is the backward Euler method:

yn+1 = yn + h f(tn + h, yn+1)

The Butcher Tableau for this is simply:

1  | 1
---+---
   | 1

It can be difficult to make sense of even this simple implicit method, as seen from the expression for k1:

k1 = f(tn + h, yn + h k1)

In this case, the awkward expression above can be simplified by noting that

yn+1 = yn + h k1

so that

k1 = f(tn + h, yn+1)

from which

yn+1 = yn + h f(tn + h, yn+1)

follows. Though simpler than the "raw" representation before manipulation, this is an implicit relation, so the actual solution procedure is problem dependent. Multistep implicit methods have been used with success by some researchers. The combination of stability, higher-order accuracy with fewer steps, and stepping that depends only on the previous value makes them attractive; however, the complicated problem-specific implementation and the fact that ki must often be approximated iteratively means that they are not common.
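The implicit relation can be resolved iteratively in practice. The sketch below uses plain fixed-point iteration (Newton's method is the usual choice for stiff problems; fixed-point suffices for this small example) and checks it against the linear test equation y′ = λy, where the relation can be solved in closed form:

```python
def backward_euler_step(f, t, y, h, iters=50):
    """One step of backward Euler, y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}),
    solving the implicit relation by fixed-point iteration."""
    y_next = y  # initial guess: the previous value
    for _ in range(iters):
        y_next = y + h * f(t + h, y_next)
    return y_next

# For y' = lam * y the implicit relation gives exactly
# y_{n+1} = y_n / (1 - h * lam).
lam, h = -2.0, 0.1
approx = backward_euler_step(lambda t, y: lam * y, 0.0, 1.0, h)
exact = 1.0 / (1 - h * lam)
print(approx, exact)  # both ~0.8333
```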

References

J. C. Butcher, Numerical methods for ordinary differential equations, ISBN 0471967580

George E. Forsythe, Michael A. Malcolm, and Cleve B. Moler. Computer Methods for Mathematical Computations. Englewood Cliffs, NJ: Prentice-Hall, 1977. (See Chapter 6.)

Ernst Hairer, Syvert Paul Nørsett, and Gerhard Wanner. Solving ordinary differential equations I: Nonstiff problems, second edition. Berlin: Springer Verlag, 1993. ISBN 3-540-56670-8.

William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling. Numerical Recipes in C. Cambridge, UK: Cambridge University Press, 1988. (See Sections 16.1 and 16.2.)

Kaw, Autar; Kalu, Egwu (2008), Numerical Methods with Applications (1st ed.), www.autarkaw.com.

Kendall E. Atkinson. An Introduction to Numerical Analysis. John Wiley & Sons - 1989

F. Cellier, E. Kofman. Continuous System Simulation. Springer Verlag, 2006. ISBN 0-387-26102-8.

External links

Runge-Kutta

Runge-Kutta 4th Order Method

Runge Kutta Method for O.D.E.'s


Retrieved from "http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods"

Categories: Numerical differential equations | Runge-Kutta methods

This page was last modified on 28 November 2009 at 11:21.

Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. See Terms of Use for details.


Higher Order Taylor Methods

Marcelo Julio Alvisio & Lisa Marie Danz

May 16, 2007

Introduction

Differential equations are one of the building blocks of science and engineering. Scientists aim to obtain numerical solutions to differential equations whenever explicit solutions do not exist or are too hard to find. These numerical solutions are approximated through a variety of methods, some of which we set out to explore in this project.

We require two conditions when computing differential equations numerically. First, we require that the solution depend continuously on the initial value. Otherwise, numerical error introduced in the representation of numbers in computer systems would produce results very far from the actual solution. Second, we require that the solution change continuously with respect to the differential equation itself. Otherwise, we cannot expect the method that approximates the differential equation to give accurate results.

The most common methods for computing differential equations numerically include Euler's method, Higher Order Taylor method and Runge-Kutta methods. In this project, we concentrate on the “Higher Order Taylor Method.” This method employs the Taylor polynomial of the solution to the equation. It approximates the zeroth order term by using the previous step's value (which is the initial condition for the first step), and the subsequent terms of the Taylor expansion by using the differential equation. We call it Higher Order Taylor Method, the “lower” order method being Euler's Method.

Under certain conditions, the Higher Order Taylor Method limits the error to O(h^n), where n is the order used. We will present several examples to test this idea. We will look into two main parameters as a measure of the effectiveness of the method, namely accuracy and efficiency.

Theory of the Higher Order Taylor Method

Definition 2.1 Consider the differential equation given by y′(t) = f(t,y), y(a) = c. Then for b > a, the nth order Taylor approximation to y(b) with K steps is given by yK, where {yi} is defined recursively as:

t0 = a
y0 = y(a) = c
ti+1 = ti + h
yi+1 = yi + h f(ti,yi) + (h^2/2) (∂f/∂t)(ti,yi) + ... + (h^n/n!) (∂^(n−1)f/∂t^(n−1))(ti,yi)

with h = (b − a)/K.

It makes sense to formulate such a definition in view of the Taylor series expansion that is used when y(t) is known explicitly. All we have done is use f(t,y) for y′(t), ft(t,y) for y″(t), and so forth. The next task is to estimate the error that this approximation introduces.

We know by Taylor's Theorem that, for any solution that admits a Taylor expansion at the point t_i, we have

y(t_{i+1}) = y(t_i) + h y′(t_i) + (h^2/2) y″(t_i) + ... + (h^n/n!) y^{(n)}(t_i) + (h^{n+1}/(n+1)!) y^{(n+1)}(σ)

where σ is between t_i and t_{i+1}.

Using y′ = f(t, y), this translates to

y(t_{i+1}) = y(t_i) + h f(t_i, y_i) + (h^2/2) ∂f/∂t (t_i, y_i) + ... + (h^n/n!) ∂^{n−1}f/∂t^{n−1} (t_i, y_i) + (h^{n+1}/(n+1)!) ∂^n f/∂t^n (σ, y(σ))

Therefore, the local error, that is to say, the error introduced at each step if the values calculated previously were exact, is given by:

E_i = (h^{n+1}/(n+1)!) ∂^n f/∂t^n (σ, y(σ))

which means that

|E_i| ≤ (h^{n+1}/(n+1)!) max_{σ∈[a,b]} |∂^n f/∂t^n (σ, y(σ))|

We can say E_i = O(h^{n+1}). Now, since the number of steps from a to b is proportional to 1/h, we multiply the error per step by the number of steps to find a total error

E = O(h^n).

In Practice: Examples

We will consider differential equations that we can solve explicitly to obtain a formula for y(t) satisfying y′(t) = f(t, y). This way, we can calculate the actual error by subtracting the value that the Higher Order Taylor method predicts for y(b) from the exact value of y(b). To approximate values in the following examples, the derivatives of f(t, y) were computed by hand. MATLAB then performed the iteration and arrived at the approximation.

Notice that the definitions given in the previous section could also have been adapted for a varying step size h. However, for ease of computation we have kept the step size constant. In our computations, we have chosen a step size of (b − a)/2^k, which results in K = 2^k steps of equal length over the interval.

Example 3.1 We consider the differential equation

y′(t) = f(t, y) = (1 + t)/(1 + y)

with initial condition y(1) = 2. It is clear that y(t) = √(t^2 + 2t + 6) − 1 solves this equation.
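As a quick sanity check (a small Python sketch of our own, not part of the original hand computation), one can confirm numerically that this closed form satisfies both the equation and the initial condition:

```python
import math

def y_exact(t):
    # Proposed closed-form solution y(t) = sqrt(t^2 + 2t + 6) - 1
    return math.sqrt(t * t + 2 * t + 6) - 1

def f(t, y):
    # Right-hand side of the ODE: y'(t) = (1 + t) / (1 + y)
    return (1 + t) / (1 + y)

# Initial condition y(1) = 2
assert abs(y_exact(1) - 2) < 1e-12

# Central-difference check that y'(t) = f(t, y(t)) at a few sample points
eps = 1e-6
for t in [1.0, 1.5, 2.0]:
    dydt = (y_exact(t + eps) - y_exact(t - eps)) / (2 * eps)
    assert abs(dydt - f(t, y_exact(t))) < 1e-6
print("closed form satisfies the ODE")
```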

Thus we calculate the error for y(2) by subtracting the approximation of y(2) from the exact value of y(2). Recall that we are using h = 2^{−k} because (b − a) = 1. The following table displays the errors calculated.

            k = 1       k = 2       k = 3       k = 4
order = 1    .0333       .0158       .0077       .0038
order = 2   −.0038      −.0009      −.0002      −.0001
order = 3    .0003269    .0000383    .0000046    .0000006
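The table above can be reproduced with a short script. The sketch below is in Python rather than the MATLAB used for the original computation; the closed-form derivatives y″ and y‴ were derived by hand from this particular f and are our own working, not quoted from the text:

```python
import math

def taylor_error(order, k):
    """Error y(2) - y_K of the order-n Taylor method (Definition 2.1) for
    y' = (1+t)/(1+y), y(1) = 2, using K = 2**k steps on [1, 2]."""
    a, b = 1.0, 2.0
    K = 2 ** k
    h = (b - a) / K
    t, y = a, 2.0
    for _ in range(K):
        # Derivatives of the solution along the ODE, computed by hand:
        d1 = (1 + t) / (1 + y)                                    # y'
        d2 = 1 / (1 + y) - (1 + t) ** 2 / (1 + y) ** 3            # y''
        d3 = (-3 * (1 + t) / (1 + y) ** 3
              + 3 * (1 + t) ** 3 / (1 + y) ** 5)                  # y'''
        y += h * d1
        if order >= 2:
            y += h ** 2 / 2 * d2
        if order >= 3:
            y += h ** 3 / 6 * d3
        t += h
    exact = math.sqrt(b * b + 2 * b + 6) - 1   # y(2) = sqrt(14) - 1
    return exact - y

for order in (1, 2, 3):
    print(order, [round(taylor_error(order, k), 7) for k in (1, 2, 3, 4)])
```

The printed rows should match the error table above to the displayed precision, and each row should shrink at the rate predicted by E = O(h^n).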

Runge-Kutta Methods

The Taylor methods in the preceding section have the desirable feature that the F.G.E. (final global error) is of order O(h^N), and N can be chosen large enough that this error is small. However, the shortcomings of the Taylor methods are the a priori determination of N and the computation of the higher derivatives, which can be very complicated. Each Runge-Kutta method is derived from an appropriate Taylor method in such a way that the F.G.E. is of order O(h^N). A trade-off is made: several function evaluations are performed at each step, which eliminates the need to compute the higher derivatives. These methods can be constructed for any order N. The Runge-Kutta method of order N = 4 is the most popular.

It is a good choice for common purposes because it is quite accurate, stable, and easy to program. Most authorities proclaim that it is not necessary to go to a higher-order method because the increased accuracy is offset by additional computational effort. If more accuracy is required, then either a smaller step size or an adaptive method should be used.

The fourth-order Runge-Kutta method (RK4) simulates the accuracy of the Taylor series method of order N = 4. The method is based on computing y_{k+1} as follows:

(1) y_{k+1} = y_k + w_1 k_1 + w_2 k_2 + w_3 k_3 + w_4 k_4,

where k_1, k_2, k_3, and k_4 have the form

(2)

k_1 = h f(t_k, y_k),

k_2 = h f(t_k + a_1 h, y_k + b_1 k_1),

k_3 = h f(t_k + a_2 h, y_k + b_2 k_1 + b_3 k_2),

k_4 = h f(t_k + a_3 h, y_k + b_4 k_1 + b_5 k_2 + b_6 k_3).

By matching coefficients with those of the Taylor series method of order N = 4, so that the local truncation error is of order O(h^5), Runge and Kutta were able to obtain the following system of equations:

(3)

b_1 = a_1,

b_2 + b_3 = a_2,

b_4 + b_5 + b_6 = a_3,

w_1 + w_2 + w_3 + w_4 = 1,

w_2 a_1 + w_3 a_2 + w_4 a_3 = 1/2,

w_2 a_1^2 + w_3 a_2^2 + w_4 a_3^2 = 1/3,

w_2 a_1^3 + w_3 a_2^3 + w_4 a_3^3 = 1/4,

w_3 a_1 b_3 + w_4 (a_1 b_5 + a_2 b_6) = 1/6,

w_3 a_1 a_2 b_3 + w_4 a_3 (a_1 b_5 + a_2 b_6) = 1/8,

w_3 a_1^2 b_3 + w_4 (a_1^2 b_5 + a_2^2 b_6) = 1/12,

w_4 a_1 b_3 b_6 = 1/24.

The system involves 11 equations in 13 unknowns. Two additional conditions must be supplied to solve the system. The most useful choice is

(4) a_1 = 1/2 and b_2 = 0.

Then the solution for the remaining variables is

(5) a_2 = 1/2, a_3 = 1, b_1 = 1/2, b_3 = 1/2, b_4 = 0, b_5 = 0, b_6 = 1,
w_1 = 1/6, w_2 = 1/3, w_3 = 1/3, w_4 = 1/6.
The values in (4) and (5) are substituted into (2) and (1) to obtain the formula for the standard Runge-Kutta method of order N = 4, which is stated as follows. Start with the initial point (t_0, y_0) and generate the sequence of approximations using

(6) y_{k+1} = y_k + (h/6)(f_1 + 2 f_2 + 2 f_3 + f_4),

where

(7)

f_1 = f(t_k, y_k),

f_2 = f(t_k + h/2, y_k + (h/2) f_1),

f_3 = f(t_k + h/2, y_k + (h/2) f_2),

f_4 = f(t_k + h, y_k + h f_3).
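Equations (6) and (7) translate directly into code. The following Python sketch applies the standard RK4 step to the initial value problem of Example 3.1 (the choice of test problem is ours, made so the error can be compared against an exact solution):

```python
import math

def rk4(f, t0, y0, b, K):
    """Standard fourth-order Runge-Kutta method, equations (6)-(7)."""
    h = (b - t0) / K
    t, y = t0, y0
    for _ in range(K):
        f1 = f(t, y)
        f2 = f(t + h / 2, y + h / 2 * f1)
        f3 = f(t + h / 2, y + h / 2 * f2)
        f4 = f(t + h, y + h * f3)
        y += h * (f1 + 2 * f2 + 2 * f3 + f4) / 6
        t += h
    return y

# Test problem from Example 3.1: y' = (1+t)/(1+y), y(1) = 2,
# with exact solution y(t) = sqrt(t^2 + 2t + 6) - 1.
f = lambda t, y: (1 + t) / (1 + y)
approx = rk4(f, 1.0, 2.0, 2.0, 8)   # K = 8 steps, i.e. h = 1/8
exact = math.sqrt(14) - 1
print(abs(exact - approx))
```

With only eight steps the RK4 error on this problem is already well below the errors in the order-1 and order-2 rows of the table above, consistent with its O(h^4) global accuracy.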

Discussion about the Method

The complete development of the equations in (7) is beyond the scope of this discussion and can be found in advanced texts, but we can get some insight. Consider the graph of the solution curve y = y(t) over the first subinterval [t_0, t_1]. The function values in (7) are approximations for slopes of this curve. Here f_1 is the slope at the left endpoint, f_2 and f_3 are two estimates for the slope in the middle, and f_4 is the slope at the right endpoint (Figure 9.9(a)). The next point (t_1, y_1) is obtained by integrating the slope function:

(8) y(t_1) − y(t_0) = ∫_{t_0}^{t_1} f(t, y(t)) dt.

If Simpson's rule is applied with step size h/2, the approximation to the integral in (8) is

(9) ∫_{t_0}^{t_1} f(t, y(t)) dt ≈ (h/6)(f(t_0, y(t_0)) + 4 f(t_{1/2}, y(t_{1/2})) + f(t_1, y(t_1))),

where t_{1/2} is the midpoint of the interval. Three function values are needed; hence we make the obvious choice f(t_0, y(t_0)) = f_1 and f(t_1, y(t_1)) ≈ f_4. For the value in the middle we choose the average of f_2 and f_3:

f(t_{1/2}, y(t_{1/2})) ≈ (f_2 + f_3)/2.

These values are substituted into (9), which is used in equation (8) to get y_1:

(10) y_1 = y_0 + (h/6)(f_1 + 4(f_2 + f_3)/2 + f_4).

When this formula is simplified, it is seen to be equation (6) with k = 0. The graph for the integral in (9) is shown in Figure 9.9(b).


[Figure 9.9 The graphs y = y(t) and z = f(t, y(t)) in the discussion of the Runge-Kutta method of order N = 4. (a) Predicted slopes m_j to the solution curve y = y(t): m_1 = f_1 at t_0, m_2 = f_2 and m_3 = f_3 at t_{1/2}, and m_4 = f_4 at t_1. (b) Integral approximation: y(t_1) − y_0 ≈ (h/6)(f_1 + 2 f_2 + 2 f_3 + f_4).]

Step Size versus Error

The error term for Simpson's rule with step size h/2 is

(11) −y^{(4)}(c_1) h^5/2880.

If the only error at each step were that given in (11), after M steps the accumulated error for the RK4 method would be

(12) −Σ_{k=1}^{M} y^{(4)}(c_k) h^5/2880 ≈ ((b − a)/2880) y^{(4)}(c) h^4 = O(h^4).
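This O(h^4) behavior can be confirmed empirically: halving h should cut the global error by roughly 2^4 = 16. A sketch of such a check (again using the Example 3.1 problem as a stand-in; the ratio only approaches 16 once h is small enough for the leading error term to dominate):

```python
import math

def rk4_error(K):
    """Global error of RK4 at t = 2 for y' = (1+t)/(1+y), y(1) = 2, K steps."""
    h = 1.0 / K
    t, y = 1.0, 2.0
    f = lambda t, y: (1 + t) / (1 + y)
    for _ in range(K):
        f1 = f(t, y)
        f2 = f(t + h / 2, y + h / 2 * f1)
        f3 = f(t + h / 2, y + h / 2 * f2)
        f4 = f(t + h, y + h * f3)
        y += h * (f1 + 2 * f2 + 2 * f3 + f4) / 6
        t += h
    return abs(math.sqrt(14) - 1 - y)   # exact value is y(2) = sqrt(14) - 1

# Ratio of errors when the step size is halved; should approach 2^4 = 16
for K in (4, 8, 16):
    print(K, rk4_error(K), rk4_error(K) / rk4_error(2 * K))
```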
