
The Numerical Differential Equation Analysis package combines functionality for analyzing differential equations using Butcher trees, Gaussian quadrature, and Newton-Cotes quadrature.

Butcher

Runge-Kutta methods are useful for numerically solving certain types of ordinary differential equations. Deriving high-order Runge-Kutta methods is no easy task, however. There are several reasons for this. The first difficulty is in finding the so-called order conditions. These are nonlinear equations in the coefficients for the method that must be satisfied to make the error in the method of order O(h^n) for some integer n, where h is the step size. The second difficulty is in solving these equations. Besides being nonlinear, there is generally no unique solution, and many heuristics and simplifying assumptions are usually made. Finally, there is the problem of combinatorial explosion. For a twelfth-order method there are 7813 order conditions!

This package performs the first task: finding the order conditions that must be satisfied. The result is expressed in terms of unknown coefficients a_ij, b_j, and c_i. The s-stage Runge-Kutta method to advance from x to x+h is then

Y(x+h) = Y(x) + h (b_1 f(Y_1) + b_2 f(Y_2) + ... + b_s f(Y_s)),

where

Y_i = Y(x) + h (a_i1 f(Y_1) + a_i2 f(Y_2) + ... + a_is f(Y_s)),  i = 1, 2, ..., s.

Sums of the elements in the rows of the matrix [a_ij] occur repeatedly in the conditions imposed on a_ij and b_j. In recognition of this, and as a notational convenience, it is usual to introduce the coefficients c_i and the definition

c_i = a_i1 + a_i2 + ... + a_is.

This definition is referred to as the row-sum condition and is the first in a sequence of row-simplifying conditions.

If aij=0 for all i≤j the method is explicit; that is, each of the Yi (x+h) is defined in terms of previously computed values. If the matrix [aij] is not strictly lower triangular, the method is implicit and requires the solution of a (generally nonlinear) system of equations for each timestep. A diagonally implicit method has aij=0 for all i<j.

There are several ways to express the order conditions. If the number of stages s is specified as a positive integer, the order conditions are expressed in terms of sums of explicit terms. If the number of stages is specified as a symbol, the order conditions will involve symbolic sums. If the number of stages is not specified at all, the order conditions will be expressed in stage-independent tensor notation. In addition to the matrix a and the vectors b and c, this notation involves the vector e, which is composed of all ones. This notation has two distinct advantages: it is independent of the number of stages s and it is independent of the particular Runge-Kutta method.

For further details of the theory see the references.

a_ij: the coefficient of f(Y_j(x)) in the formula for Y_i(x) of the method

b_j: the coefficient of f(Y_j(x)) in the formula for Y(x) of the method

c_i: a notational convenience for the row sum a_i1 + a_i2 + ... + a_is

e: a notational convenience for the vector (1, 1, 1, ...)

Notation used by functions for Butcher.

RungeKuttaOrderConditions[p,s]: give a list of the order conditions that any s-stage Runge-Kutta method of order p must satisfy

ButcherPrincipalError[p,s]: give a list of the order p+1 terms appearing in the Taylor series expansion of the error for an order-p, s-stage Runge-Kutta method

RungeKuttaOrderConditions[p], ButcherPrincipalError[p]: give the result in stage-independent tensor notation

Functions associated with the order conditions of Runge-Kutta methods.

ButcherRowSum: specify whether the row-sum conditions for the c_i should be explicitly included in the list of order conditions

ButcherSimplify: specify whether to apply Butcher's row and column simplifying assumptions

Some options for RungeKuttaOrderConditions.

This gives the number of order conditions for each order up through order 10. Notice the combinatorial explosion.

In[2]:=

Out[2]=
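The counts in question are the numbers of rooted trees with a given number of vertices: each tree of order p contributes one condition of order p. As a cross-check outside Mathematica, here is a minimal Python sketch (the function name is our own) of the classical rooted-tree recurrence (OEIS A000081); summing the counts through order 12 reproduces the 7813 conditions quoted above.

    def rooted_tree_counts(nmax):
        # a[n] = number of rooted trees with n vertices (OEIS A000081)
        a = [0] * (nmax + 1)
        a[1] = 1
        for n in range(1, nmax):
            total = 0
            for k in range(1, n + 1):
                s = sum(d * a[d] for d in range(1, k + 1) if k % d == 0)
                total += s * a[n - k + 1]
            a[n + 1] = total // n  # the division is always exact
        return a[1:]

    counts = rooted_tree_counts(12)
    print(counts)       # [1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842, 4766]
    print(sum(counts))  # 7813, the number of conditions for a 12th-order method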

This gives the order conditions that must be satisfied by any first-order, 3-stage Runge-Kutta method, explicitly including the row-sum conditions.

In[3]:=

Out[3]=

These are the order conditions that must be satisfied by any second-order, 3-stage Runge-Kutta method. Here the row-sum conditions are not included.

In[4]:=

Out[4]=

It should be noted that the sums involved on the left-hand sides of the order conditions will be left in symbolic form and not expanded if the number of stages is left as a symbolic argument. This will greatly simplify the results for high-order, many-stage methods. An even more compact form results if you do not specify the number of stages at all and the answer is given in tensor form.

These are the order conditions that must be satisfied by any second-order, s-stage method.

In[5]:=

Out[5]=

Replacing s by 3 gives the same result as RungeKuttaOrderConditions.

In[6]:=

Out[6]=

These are the order conditions that must be satisfied by any second-order method. This uses tensor notation. The vector e is a vector of ones whose length is the number of stages.

In[7]:=

Out[7]=
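For reference (this is standard Butcher theory rather than the package's output), the conditions through second order in the tensor notation read

b . e = 1  (first order),    b . c = 1/2  (second order),

where the dot denotes the ordinary inner product over the stages.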

The tensor notation can likewise be expanded to give the conditions in full.

In[8]:=

Out[8]=

These are the principal error coefficients for any third-order method.

In[9]:=

Out[9]=

This is a bound on the local error of any third-order method in the limit as h approaches 0, normalized to eliminate the effects of the ODE.

In[10]:=

Out[10]=

Here are the order conditions that must be satisfied by any fourth-order, 1-stage Runge-Kutta method. Note that there is no possible way for these order conditions to be satisfied; there need to be more stages (the second argument must be larger) for there to be sufficiently many unknowns to satisfy all of the conditions.

In[11]:=

Out[11]=

RungeKuttaMethod: specify the type of Runge-Kutta method for which order conditions are being sought

Explicit: a setting for the option RungeKuttaMethod specifying that the order conditions are to be for an explicit Runge-Kutta method

DiagonallyImplicit: a setting for the option RungeKuttaMethod specifying that the order conditions are to be for a diagonally implicit Runge-Kutta method

Implicit: a setting for the option RungeKuttaMethod specifying that the order conditions are to be for an implicit Runge-Kutta method

$RungeKuttaMethod: a global variable whose value can be set to Explicit, DiagonallyImplicit, or Implicit

Controlling the type of Runge-Kutta method in RungeKuttaOrderConditions and related functions.

RungeKuttaOrderConditions and certain related functions have the option RungeKuttaMethod with default setting $RungeKuttaMethod. Normally you will want to determine the Runge-Kutta method being considered by setting $RungeKuttaMethod to one of Implicit, DiagonallyImplicit, or Explicit, but you can specify an option setting or even change the default for an individual function.

These are the order conditions that must be satisfied by any second-order, 3-stage diagonally implicit Runge-Kutta method.

In[12]:=

Out[12]=

An alternative (but less efficient) way to get a diagonally implicit method is to force a to be lower triangular by replacing upper-triangular elements with 0.

In[13]:=

Out[13]=

These are the order conditions that must be satisfied by any third-order, 2-stage explicit Runge-Kutta method. The contradiction in the order conditions indicates that no such method is possible, a result which holds for any explicit Runge-Kutta method when the number of stages is less than the order.

In[14]:=

Out[14]=

ButcherColumnConditions[p,s]: give the column simplifying conditions up to and including order p for s stages

ButcherRowConditions[p,s]: give the row simplifying conditions up to and including order p for s stages

ButcherQuadratureConditions[p,s]: give the quadrature conditions up to and including order p for s stages

ButcherColumnConditions[p], ButcherRowConditions[p], etc.: give the result in stage-independent tensor notation

More functions associated with the order conditions of Runge-Kutta methods.

Butcher showed that the number and complexity of the order conditions can be reduced considerably at high orders by the adoption of so-called simplifying assumptions. For example, this reduction can be accomplished by adopting sufficient row and column simplifying assumptions and quadrature-type order conditions. The option ButcherSimplify in RungeKuttaOrderConditions can be used to determine these automatically.
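For orientation (quoting the standard forms from Butcher's theory, not package output), the quadrature conditions B(p), the row simplifying conditions C(eta), and the column simplifying conditions D(xi) are usually written

B(p):   sum_i b_i c_i^(k-1) = 1/k,                       k = 1, ..., p;
C(eta): sum_j a_ij c_j^(k-1) = c_i^k / k,                k = 1, ..., eta, for all i;
D(xi):  sum_i b_i c_i^(k-1) a_ij = (b_j/k)(1 - c_j^k),   k = 1, ..., xi, for all j.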

These are the column simplifying conditions up to order 4.

In[15]:=

Out[15]=

These are the row simplifying conditions up to order 4.

In[16]:=

Out[16]=

These are the quadrature conditions up to order 4.

In[17]:=

Out[17]=

Trees are fundamental objects in Butcher's formalism. They yield both the derivative in a power series expansion of a Runge-Kutta method and the related order constraint on the coefficients. This package provides a number of functions related to Butcher trees.

f: the elementary symbol used in the representation of Butcher trees

ButcherTrees[p]: give a list, partitioned by order, of the trees for any Runge-Kutta method of order p

ButcherTreeSimplify[p,Eta,Xi]: give the set of trees through order p that are not reduced by Butcher's simplifying assumptions, assuming that the quadrature conditions through order p, the row simplifying conditions through order Eta, and the column simplifying conditions through order Xi all hold. The result is grouped by order, starting with the first nonvanishing trees

ButcherTreeCount[p]: give a list of the number of trees through order p

ButcherTreeQ[tree]: give True if the tree or list of trees tree is valid functional syntax, and False otherwise

Constructing and enumerating Butcher trees.

This gives the trees that are needed for any third-order method. The trees are represented in a functional form in terms of the elementary symbol f.

In[18]:=

Out[18]=

This tests the validity of the syntax of two trees. Butcher trees must be constructed using multiplication, exponentiation or application of the function f.

In[19]:=

Out[19]=

This evaluates the number of trees at each order through order 10. The result is equivalent to Out[2] but the calculation is much more efficient since it does not actually involve constructing order conditions or trees.

In[20]:=

Out[20]=

The previous result can be used to calculate the total number of trees required at each order through order 10.

In[21]:=

Out[21]=

The number of constraints for a method using row and column simplifying assumptions depends upon the number of stages. ButcherTreeSimplify gives the Butcher trees that are not reduced assuming that these assumptions hold.

This gives the additional trees that are necessary for a fourth-order method assuming that the quadrature conditions through order 4 and the row and column simplifying assumptions of order 1 hold. The result is a single tree of order 4 (which corresponds to a single fourth-order condition).

In[22]:=

Out[22]=

It is often useful to be able to visualize a tree or forest of trees graphically. For example, depicting trees yields insight, which can in turn be used to aid in the construction of Runge-Kutta methods.

ButcherPlot[tree]: give a plot of the tree tree

ButcherPlot[{tree1,tree2,...}]: give an array of plots of the trees in the forest {tree1, tree2, ...}

Drawing Butcher trees.

ButcherPlotColumns: specify the number of columns in the GraphicsGrid plot of a list of trees

ButcherPlotLabel: specify a list of plot labels to be used to label the nodes of the plot

ButcherPlotNodeSize: specify a scaling factor for the nodes of the trees in the plot

ButcherPlotRootSize: specify a scaling factor for the highlighting of the root of each tree in the plot; a zero value does not highlight roots

Options to ButcherPlot.

This plots and labels the trees through order 4.

In[23]:=

Out[23]=

In addition to generating and drawing Butcher trees, many functions are provided for measuring and manipulating them. For a complete description of the importance of these functions, see Butcher.

ButcherHeight[tree]: give the height of the tree tree

ButcherWidth[tree]: give the width of the tree tree

ButcherOrder[tree]: give the order, or number of vertices, of the tree tree

ButcherAlpha[tree]: give the number of ways of labeling the vertices of the tree tree with a totally ordered set of labels such that if (m, n) is an edge, then m < n

ButcherBeta[tree]: give the number of ways of labeling the tree tree with ButcherOrder[tree]-1 distinct labels such that the root is not labeled, but every other vertex is labeled

ButcherBeta[n,tree]: give the number of ways of labeling n of the vertices of the tree with n distinct labels such that every leaf is labeled and the root is not labeled

ButcherBetaBar[tree]: give the number of ways of labeling the tree tree with ButcherOrder[tree] distinct labels such that every node, including the root, is labeled

ButcherBetaBar[n,tree]: give the number of ways of labeling n of the vertices of the tree with n distinct labels such that every leaf is labeled

ButcherGamma[tree]: give the density of the tree tree; the reciprocal of the density is the right-hand side of the order condition imposed by tree

ButcherPhi[tree,s]: give the weight of the tree tree; the weight Phi(tree) is the left-hand side of the order condition imposed by tree

ButcherPhi[tree]: give Phi(tree) using tensor notation

ButcherSigma[tree]: give the order of the symmetry group of isomorphisms of the tree tree with itself

Other functions associated with Butcher trees.

This gives the order of the tree f[f[f[f] f^2]].

In[24]:=

Out[24]=

This gives the density of the tree f[f[f[f] f^2]].

In[25]:=

Out[25]=
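For readers without the package, both quantities are easy to compute directly from the tree structure: the order is the number of vertices, and the density gamma satisfies gamma(t) = order(t) times the product of the densities of the subtrees of the root. A minimal Python sketch (the representation and names are our own, with a tuple of subtrees standing for a vertex):

    def order(tree):
        # number of vertices: this vertex plus all vertices below it
        return 1 + sum(order(c) for c in tree)

    def gamma(tree):
        # Butcher density: order of the tree times densities of its subtrees
        g = order(tree)
        for c in tree:
            g *= gamma(c)
        return g

    # f[f[f[f] f^2]]: a root with one child, whose children are f[f], f, f
    leaf = ()
    tree = (((leaf,), leaf, leaf),)
    print(order(tree), gamma(tree))  # 6 60

The reciprocal 1/60 is then the right-hand side of the order condition imposed by this tree, consistent with the description of ButcherGamma above.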

This gives the elementary weight function imposed by f[f[f[f] f^2]] for an s-stage method.

In[26]:=

Out[26]=

The subscript notation is a formatting device and the subscripts are really just the indexed variable NumericalDifferentialEquationAnalysis`Private`$i.

In[27]:=

Out[27]//FullForm=

It is also possible to obtain solutions to the order conditions using Solve and related functions. Many issues related to the construction of Runge-Kutta methods using this package can be found in Sofroniou. That article also contains details concerning the algorithms used in Butcher.m and discusses applications.

Gaussian Quadrature

As one of its methods, the Mathematica function NIntegrate uses a fairly sophisticated Gauss-Kronrod-based algorithm. The Gaussian quadrature functionality provided in Numerical Differential Equation Analysis allows you to easily study some of the theory behind ordinary Gaussian quadrature which is a little less sophisticated.

The basic idea behind Gaussian quadrature is to approximate the value of an integral as a linear combination of values of the integrand evaluated at specific points:

integral from a to b of f(x) dx ~ w_1 f(x_1) + w_2 f(x_2) + ... + w_n f(x_n).

Since there are 2n free parameters to be chosen (both the abscissas x_i and the weights w_i), and since both integration and the sum are linear operations, you can expect to be able to make the formula exact for all polynomials of degree 2n-1 or less. In addition to knowing what the optimal abscissas and weights are, it is often desirable to know how large the error in the approximation will be. This package allows you to answer both of these questions.

GaussianQuadratureWeights[n,a,b]: give a list of the pairs (x_i, w_i) to machine precision for quadrature on the interval a to b

GaussianQuadratureError[n,f,a,b]: give the error to machine precision

GaussianQuadratureWeights[n,a,b,prec]: give a list of the pairs (x_i, w_i) to precision prec

GaussianQuadratureError[n,f,a,b,prec]: give the error to precision prec

Finding formulas for Gaussian quadrature.

This gives the abscissas and weights for the five-point Gaussian quadrature formula on the interval (-3, 7).

In[2]:=

Out[2]=
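If you want to sanity-check such a table outside Mathematica, Gauss-Legendre nodes and weights on [-1, 1] are available from NumPy and can be mapped to a general interval (a, b) by an affine change of variables. A sketch (the helper name is our own; the package's output should agree up to rounding):

    import numpy as np

    def gaussian_quadrature_weights(n, a, b):
        # Gauss-Legendre rule on [-1, 1], shifted to [a, b]:
        # x -> (b-a)/2 * x + (a+b)/2,  w -> (b-a)/2 * w
        x, w = np.polynomial.legendre.leggauss(n)
        return (b - a) / 2 * x + (a + b) / 2, (b - a) / 2 * w

    x, w = gaussian_quadrature_weights(5, -3, 7)
    # a 5-point rule integrates polynomials up to degree 9 exactly, e.g. x^4:
    print(np.dot(w, x**4), (7**5 - (-3)**5) / 5)  # both 3410.0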

Here is the error in that formula. Unfortunately it involves the tenth derivative of f at an unknown point so you don't really know what the error itself is.

In[3]:=

Out[3]=

You can see that the error decreases rapidly as the length of the interval decreases.

In[4]:=

Out[4]=

Newton-Cotes

As one of its methods, the Mathematica function NIntegrate uses a fairly sophisticated Gauss-Kronrod-based algorithm. Other types of quadrature formulas exist, each with their own advantages. For example, Gaussian quadrature uses values of the integrand at irregularly spaced abscissas, so if you want to integrate a function presented in tabular form at equally spaced abscissas, it won't work very well. An alternative is to use Newton-Cotes quadrature.

The basic idea behind Newton-Cotes quadrature is to approximate the value of an integral as a linear combination of values of the integrand evaluated at equally spaced points:

integral from a to b of f(x) dx ~ w_1 f(x_1) + w_2 f(x_2) + ... + w_n f(x_n).

In addition, there is the question of whether or not to include the end points in the sum. If they are included, the quadrature formula is referred to as a closed formula. If not, it is an open formula. If the formula is open there is some ambiguity as to where the first abscissa is to be placed. The open formulas given in this package have the first abscissa one half step from the lower end point.

Since there are n free parameters to be chosen (the weights) and since both integration and the sum are linear operations, you can expect to be able to make the formula correct for all polynomials of degree less than about n. In addition to knowing what the weights are, it is often desirable to know how large the error in the approximation will be. This package allows you to answer both of these questions.

NewtonCotesWeights[n,a,b]: give a list of the n pairs (x_i, w_i) for quadrature on the interval a to b

NewtonCotesError[n,f,a,b]: give the error in the formula

Finding formulas for Newton-Cotes quadrature.

QuadratureType (default value Closed): the type of quadrature, Open or Closed

Option for NewtonCotesWeights and NewtonCotesError.

Here are the abscissas and weights for the five-point closed Newton-Cotes quadrature formula on the interval (-3, 7).

In[2]:=

Out[2]=
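A hedged way to reproduce such weights without the package: an interpolatory rule is determined by its abscissas, so the weights solve the moment equations sum_i w_i x_i^k = integral of x^k over [a, b] for k = 0, ..., n-1. The Python function below is our own sketch (not the package's algorithm) and covers both the closed and the open placement described above.

    import numpy as np

    def newton_cotes_weights(n, a, b, closed=True):
        if closed:
            x = np.linspace(a, b, n)             # end points included
        else:
            h = (b - a) / n                      # first abscissa one half
            x = a + h * (np.arange(n) + 0.5)     # step from the end point
        V = np.vander(x, n, increasing=True).T   # V[k, i] = x_i**k
        m = np.array([(b**(k+1) - a**(k+1)) / (k+1) for k in range(n)])
        return x, np.linalg.solve(V, m)          # moment equations

    x, w = newton_cotes_weights(5, -3, 7)
    print(np.dot(w, x**4), (7**5 - (-3)**5) / 5)  # both 3410.0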

Here is the error in that formula. Unfortunately it involves the sixth derivative of f at an unknown point so you don't really know what the error itself is.

In[3]:=

Out[3]=

You can see that the error decreases rapidly as the length of the interval decreases.

In[4]:=

Out[4]=

This gives the abscissas and weights for the five-point open Newton-Cotes quadrature formula on the interval (-3, 7).

In[5]:=

Out[5]=

Here is the error in that formula.

In[6]:=

Out[6]=

Runge-Kutta Methods

From Wikipedia, The Free Encyclopedia

See the article on numerical ordinary differential equations for more background and other methods. See also List of Runge-Kutta methods.

The Common Fourth-Order Runge-Kutta Method

One member of the family of Runge-Kutta methods is so commonly used that it is often referred to as "RK4", "classical Runge-Kutta method" or simply as "the Runge-Kutta method".

Let an initial value problem be specified as follows:

y' = f(t, y),  y(t_0) = y_0.

Then, the RK4 method for this problem is given by the following equations:

y_{n+1} = y_n + (h/6)(k_1 + 2k_2 + 2k_3 + k_4),
t_{n+1} = t_n + h,

where y_{n+1} is the RK4 approximation of y(t_{n+1}), and

k_1 = f(t_n, y_n),
k_2 = f(t_n + h/2, y_n + (h/2) k_1),
k_3 = f(t_n + h/2, y_n + (h/2) k_2),
k_4 = f(t_n + h, y_n + h k_3).

Thus, the next value (y_{n+1}) is determined by the present value (y_n) plus the product of the size of the interval (h) and an estimated slope. The slope is a weighted average of slopes:

k_1 is the slope at the beginning of the interval;

k_2 is the slope at the midpoint of the interval, using slope k_1 to determine the value of y at the point t_n + h/2 via Euler's method;

k_3 is again the slope at the midpoint, but now using the slope k_2 to determine the y-value;

k_4 is the slope at the end of the interval, with its y-value determined using k_3.

In averaging the four slopes, greater weight is given to the slopes at the midpoint:

slope = (k_1 + 2k_2 + 2k_3 + k_4)/6.

The RK4 method is a fourth-order method, meaning that the error per step is on the order of h^5, while the total accumulated error has order h^4.

Note that the above formulae are valid for both scalar- and vector-valued functions (i.e., y can be a vector and f an operator). For example one can integrate Schrödinger's equation using the Hamiltonian operator as function f.
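To illustrate the remark about vector-valued f, here is a small NumPy sketch (our own code, not from the article) applying the RK4 formulas above to the system form of y'' = -y; the first component should track cos t.

    import numpy as np

    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

    # y'' = -y written as the first-order system (y, y')' = (y', -y)
    f = lambda t, y: np.array([y[1], -y[0]])

    t, y, h = 0.0, np.array([1.0, 0.0]), 0.1
    for _ in range(63):            # roughly one full period 2*pi
        y = rk4_step(f, t, y, h)
        t += h
    print(y[0], np.cos(t))         # the two values agree to about 1e-6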

Explicit Runge-Kutta Methods

The family of explicit Runge-Kutta methods is a generalization of the RK4 method mentioned above. It is given by

y_{n+1} = y_n + h (b_1 k_1 + b_2 k_2 + ... + b_s k_s),

where

k_1 = f(t_n, y_n),
k_2 = f(t_n + c_2 h, y_n + h a_21 k_1),
k_3 = f(t_n + c_3 h, y_n + h (a_31 k_1 + a_32 k_2)),
...
k_s = f(t_n + c_s h, y_n + h (a_s1 k_1 + ... + a_{s,s-1} k_{s-1})).

(Note: the above equations have different but equivalent definitions in different texts.)

To specify a particular method, one needs to provide the integer s (the number of stages) and the coefficients a_ij (for 1 <= j < i <= s), b_i (for i = 1, 2, ..., s), and c_i (for i = 2, 3, ..., s). These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher):

0    |
c2   | a21
c3   | a31   a32
...  | ...   ...
cs   | as1   as2   ...   as,s-1
     | b1    b2    ...   bs-1    bs

The Runge-Kutta method is consistent if

a_i1 + a_i2 + ... + a_{i,i-1} = c_i  for i = 2, ..., s.

There are also accompanying requirements if we require the method to have a certain order p, meaning that the truncation error is O(h^(p+1)). These can be derived from the definition of the truncation error itself. For example, a 2-stage method has order 2 if b_1 + b_2 = 1, b_2 c_2 = 1/2, and b_2 a_21 = 1/2.

Examples

The RK4 method falls in this framework. Its tableau is:

0    |
1/2  | 1/2
1/2  | 0     1/2
1    | 0     0     1
     | 1/6   1/3   1/3   1/6

However, the simplest Runge-Kutta method is the (forward) Euler method, given by the formula yn + 1 = yn + hf(tn,yn). This is the only consistent explicit Runge-Kutta method with one stage. The corresponding tableau is:

0  |
   | 1

An example of a second-order method with two stages is provided by the midpoint method:

y_{n+1} = y_n + h f(t_n + h/2, y_n + (h/2) f(t_n, y_n)).

The corresponding tableau is:

0    |
1/2  | 1/2
     | 0     1

Note that this 'midpoint' method is not the optimal RK2 method. An alternative is provided by Heun's method, where the 1/2's in the tableau above are replaced by 1's and the b's row is [1/2, 1/2]. If one wants to minimize the truncation error, the two-stage method with c_2 = 2/3 shown in the Usage section below should be used (Atkinson p. 423). Other important methods are Fehlberg, Cash-Karp, and Dormand-Prince. Also, read the article on adaptive step size.

Usage

The following is an example usage of a two-stage explicit Runge-Kutta method:

0    |
2/3  | 2/3
     | 1/4   3/4

to solve the initial-value problem

y' = tan(y) + 1,  y(1) = 1,  t in [1, 1.1],

with step size h = 0.025.

The tableau above yields the equivalent corresponding equations below defining the method:

k_1 = y_n,
k_2 = y_n + (2/3) h f(t_n, k_1),
y_{n+1} = y_n + h ((1/4) f(t_n, k_1) + (3/4) f(t_n + (2/3) h, k_2)).

t_0 = 1, y_0 = 1.

t_1 = 1.025:
k_1 = y_0 = 1
f(t_0, k_1) = 2.557407725
k_2 = y_0 + (2/3) h f(t_0, k_1) = 1.042623462
y_1 = y_0 + h ((1/4) f(t_0, k_1) + (3/4) f(t_0 + (2/3) h, k_2)) = 1.066869388

t_2 = 1.05:
k_1 = y_1 = 1.066869388
f(t_1, k_1) = 2.813524695
k_2 = y_1 + (2/3) h f(t_1, k_1) = 1.113761467
y_2 = y_1 + h ((1/4) f(t_1, k_1) + (3/4) f(t_1 + (2/3) h, k_2)) = 1.141332181

t_3 = 1.075:
k_1 = y_2 = 1.141332181
f(t_2, k_1) = 3.183536647
k_2 = y_2 + (2/3) h f(t_2, k_1) = 1.194391125
y_3 = y_2 + h ((1/4) f(t_2, k_1) + (3/4) f(t_2 + (2/3) h, k_2)) = 1.227417567

t_4 = 1.1:
k_1 = y_3 = 1.227417567
f(t_3, k_1) = 3.796866512
k_2 = y_3 + (2/3) h f(t_3, k_1) = 1.290698676
y_4 = y_3 + h ((1/4) f(t_3, k_1) + (3/4) f(t_3 + (2/3) h, k_2)) = 1.335079087

The numerical solution is carried by the values y_1, ..., y_4. Note that each f(t_i, k_1) is calculated once and then reused, to avoid recalculating it in the formula for y_{i+1}.
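The walk-through above is mechanical enough to script. Here is a short Python sketch of the same two-stage method on this problem (our own code; it should reproduce y_1 through y_4 to the digits shown):

    import math

    f = lambda t, y: math.tan(y) + 1.0   # right-hand side of the IVP

    h, t, y = 0.025, 1.0, 1.0
    for i in range(4):
        k1 = y
        k2 = y + (2/3) * h * f(t, k1)
        y += h * (0.25 * f(t, k1) + 0.75 * f(t + (2/3) * h, k2))
        t += h
        print(f"y{i+1} = {y:.9f}")       # 1.066869388, ..., 1.335079087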

Adaptive Runge-Kutta Methods

The adaptive methods are designed to produce an estimate of the local truncation error of a single Runge-Kutta step. This is done by having two methods in the tableau, one with order p and one with order p - 1.

The lower-order step is given by

y*_{n+1} = y_n + h (b*_1 k_1 + b*_2 k_2 + ... + b*_s k_s),

where the k_i are the same as for the higher-order method. Then the error is

e_{n+1} = y_{n+1} - y*_{n+1} = h ((b_1 - b*_1) k_1 + ... + (b_s - b*_s) k_s),

which is O(h^p). The Butcher tableau for this kind of method is extended to give the values of b*_i:

0    |
c2   | a21
c3   | a31   a32
...  | ...   ...
cs   | as1   as2   ...   as,s-1
     | b1    b2    ...   bs-1    bs
     | b*1   b*2   ...   b*s-1   b*s

The Runge-Kutta-Fehlberg method has two methods of orders 5 and 4. Its extended Butcher Tableau is:

0     |
1/4   | 1/4
3/8   | 3/32        9/32
12/13 | 1932/2197   −7200/2197   7296/2197
1     | 439/216     −8           3680/513     −845/4104
1/2   | −8/27       2            −3544/2565   1859/4104     −11/40
      | 16/135      0            6656/12825   28561/56430   −9/50    2/55
      | 25/216      0            1408/2565    2197/4104     −1/5     0

However, the simplest adaptive Runge-Kutta method involves combining the Heun method, which is order 2, with the Euler method, which is order 1. Its extended Butcher Tableau is:

0  |
1  | 1
   | 1/2   1/2
   | 1     0

The error estimate is used to control the stepsize.

Other adaptive Runge-Kutta methods are the Bogacki-Shampine method (orders 3 and 2), the Cash-Karp method and the Dormand-Prince method (both with orders 5 and 4).

Implicit Runge-Kutta Methods

The implicit methods are more general than the explicit ones. The distinction shows up in the Butcher tableau: for an implicit method, the coefficient matrix [a_ij] is not necessarily lower triangular:

c1   | a11   a12   ...   a1s
c2   | a21   a22   ...   a2s
...  | ...   ...   ...   ...
cs   | as1   as2   ...   ass
     | b1    b2    ...   bs

The approximate solution to the initial value problem reflects the greater number of coefficients:

y_{n+1} = y_n + h (b_1 k_1 + ... + b_s k_s),
k_i = f(t_n + c_i h, y_n + h (a_i1 k_1 + ... + a_is k_s)).

Due to the fullness of the matrix [a_ij], the evaluation of each k_i is now considerably involved and dependent on the specific function f(t, y). Despite the difficulties, implicit methods are of great importance due to their high (possibly unconditional) stability, which is especially important in the solution of partial differential equations. The simplest example of an implicit Runge-Kutta method is the backward Euler method:

y_{n+1} = y_n + h f(t_n + h, y_{n+1}).

The Butcher tableau for this is simply:

1  | 1
   | 1

It can be difficult to make sense of even this simple implicit method, as seen from the expression for k_1:

k_1 = f(t_n + h, y_n + h k_1).

In this case, the awkward expression above can be simplified by noting that

y_{n+1} = y_n + h k_1,

so that

k_1 = f(t_n + h, y_{n+1}),

from which

y_{n+1} = y_n + h f(t_n + h, y_{n+1})

follows. Though simpler than the "raw" representation before manipulation, this is an implicit relation, so the actual solution procedure is problem dependent. Multistep implicit methods have been used with success by some researchers. The combination of stability, higher-order accuracy with fewer steps, and stepping that depends only on the previous value makes them attractive; however, the complicated problem-specific implementation and the fact that k_i must often be approximated iteratively means that they are not common.

References

J. C. Butcher. Numerical Methods for Ordinary Differential Equations. Wiley. ISBN 0471967580.

George E. Forsythe, Michael A. Malcolm, and Cleve B. Moler. Computer Methods for Mathematical Computations. Englewood Cliffs, NJ: Prentice-Hall, 1977. (See Chapter 6.)

Ernst Hairer, Syvert Paul Nørsett, and Gerhard Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems, second edition. Berlin: Springer-Verlag, 1993. ISBN 3-540-56670-8.

William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. Numerical Recipes in C. Cambridge, UK: Cambridge University Press, 1988. (See Sections 16.1 and 16.2.)

Autar Kaw and Egwu Kalu. Numerical Methods with Applications, 1st edition, 2008. www.autarkaw.com.

Kendall E. Atkinson. An Introduction to Numerical Analysis. John Wiley & Sons, 1989.

F. Cellier and E. Kofman. Continuous System Simulation. Springer-Verlag, 2006. ISBN 0-387-26102-8.



Higher Order Taylor Methods

Marcelo Julio Alvisio & Lisa Marie Danz

May 16, 2007

Introduction

Differential equations are one of the building blocks of science and engineering. Scientists aim to obtain numerical solutions to differential equations whenever explicit solutions do not exist or are too hard to find. These numerical solutions are approximated through a variety of methods, some of which we set out to explore in this project.

We require two conditions when computing differential equations numerically. First, we require that the solution depends continuously on its initial value. Otherwise, numerical error introduced in the representation of numbers in computer systems would produce results very far from the actual solution. Second, we require that the solution changes continuously with respect to the differential equation itself. Otherwise, we cannot expect the method that approximates the differential equation to give accurate results.

The most common methods for computing differential equations numerically include Euler's method, the Higher Order Taylor method, and Runge-Kutta methods. In this project, we concentrate on the Higher Order Taylor method. This method employs the Taylor polynomial of the solution to the equation. It approximates the zeroth-order term by using the previous step's value (which is the initial condition for the first step), and the subsequent terms of the Taylor expansion by using the differential equation. We call it the Higher Order Taylor method, the "lower"-order method being Euler's method.

Under certain conditions, the Higher Order Taylor method limits the error to O(h^n), where n is the order used. We will present several examples to test this idea. We will look into two main parameters as a measure of the effectiveness of the method, namely accuracy and efficiency.

Theory of the Higher Order Taylor Method

Definition 2.1. Consider the differential equation given by y'(t) = f(t, y), y(a) = c. Then for b > a, the nth-order Taylor approximation to y(b) with K steps is given by y_K, where {y_i} is defined recursively as:

t_0 = a,
y_0 = y(a) = c,
t_{i+1} = t_i + h,
y_{i+1} = y_i + h f(t_i, y_i) + (h^2/2)(df/dt)(t_i, y_i) + ... + (h^n/n!)(d^(n-1)f/dt^(n-1))(t_i, y_i),

with h = (b - a)/K, where d/dt denotes the derivative along the solution.

It makes sense to formulate such a definition in view of the Taylor series expansion that is used when y(t) is known explicitly. All we have done is use f(t, y) for y'(t), (df/dt)(t, y) for y''(t), and so forth. The next task is to estimate the error that this approximation introduces.

We know by Taylor's Theorem that, for any solution that admits a Taylor expansion at the point t_i, we have

y(t_{i+1}) = y(t_i) + h y'(t_i) + (h^2/2) y''(t_i) + ... + (h^n/n!) y^(n)(t_i) + (h^(n+1)/(n+1)!) y^(n+1)(sigma),

where sigma is between t_i and t_{i+1}.

Using y' = f(t, y), this translates to

y(t_{i+1}) = y(t_i) + h f(t_i, y_i) + (h^2/2)(df/dt)(t_i, y_i) + ... + (h^n/n!)(d^(n-1)f/dt^(n-1))(t_i, y_i) + (h^(n+1)/(n+1)!)(d^n f/dt^n)(sigma, y(sigma)).

Therefore the local error, that is to say, the error introduced at each step if the values calculated previously were exact, is given by:

E_i = (h^(n+1)/(n+1)!)(d^n f/dt^n)(sigma, y(sigma)),

which means that

E_i <= max over sigma in [a,b] of (h^(n+1)/(n+1)!)(d^n f/dt^n)(sigma, y(sigma)).

We can say E_i = O(h^(n+1)). Now, since the number of steps from a to b is proportional to 1/h, we multiply the error per step by the number of steps to find a total error

E = O(h^n).

In Practice: Examples

We will consider differential equations that we can solve explicitly, obtaining an equation for y(t) such that y'(t) = f(t, y). This way, we can calculate the actual error by subtracting the approximation of y(b) from the exact value of y(b). To approximate values in the following examples, the derivatives of f(t, y) were computed by hand. MATLAB then performed the iteration and arrived at the approximation.

Notice that the definitions given in the previous section could also have been adapted for a varying step size h. However, for ease of computation we have kept the step size constant. In our computations, we have chosen a step size of (b - a)/2^k, which resulted in K = 2^k evenly spaced points in the interval.

Example 3.1. We consider the differential equation

y'(t) = f(t, y) = (1 + t)/(1 + y)

with initial condition y(1) = 2. It is clear that y(t) = sqrt(t^2 + 2t + 6) - 1 solves this equation.

Thus we calculate the error for y(2) by subtracting the approximation of y(2) from y(2), which is the exact value. Recall that we are using h = 2^(-k) because (b - a) = 1. The following table displays the errors calculated.

            k = 1       k = 2       k = 3       k = 4
order = 1   .0333       .0158       .0077       .0038
order = 2   −.0038      −.0009      −.0002      −.0001
order = 3   .0003269    .0000383    .0000046    .0000006
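The order-2 row of this table is easy to reproduce. Here is a Python sketch (our own translation of the MATLAB iteration; the total derivative df/dt = f_t + f_y f was computed by hand for this particular f):

    import math

    f = lambda t, y: (1 + t) / (1 + y)
    # total derivative of f along the solution: f_t + f_y * f
    df = lambda t, y: 1 / (1 + y) - (1 + t) ** 2 / (1 + y) ** 3

    def taylor2(a, b, c, K):
        h = (b - a) / K
        t, y = a, c
        for _ in range(K):
            y += h * f(t, y) + h**2 / 2 * df(t, y)
            t += h
        return y

    exact = math.sqrt(2**2 + 2 * 2 + 6) - 1   # y(2) = sqrt(14) - 1
    for k in range(1, 5):
        print(k, exact - taylor2(1.0, 2.0, 2.0, 2**k))
    # errors: -0.0038, -0.0009, -0.0002, -0.0001 (the order = 2 row)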

Runge-Kutta Methods

The Taylor methods in the preceding section have the desirable feature that the final global error (F.G.E.) is of order O(h^N), and N can be chosen large so that this error is small. However, the shortcomings of the Taylor methods are the a priori determination of N and the computation of the higher derivatives, which can be very complicated. Each Runge-Kutta method is derived from an appropriate Taylor method in such a way that the F.G.E. is of order O(h^N). A trade-off is made to perform several function evaluations at each step and eliminate the necessity to compute the higher derivatives. These methods can be constructed for any order N. The Runge-Kutta method of order N = 4 is the most popular.

It is a good choice for common purposes because it is quite accurate, stable, and easy to program. Most authorities proclaim that it is not necessary to go to a higher-order method because the increased accuracy is offset by additional computational effort. If more accuracy is required, then either a smaller step size or an adaptive method should be used.

The fourth-order Runge-Kutta method (RK4) simulates the accuracy of the Taylor series method of order N = 4. The method is based on computing yk+1 as follows:

(1) yk+1 = yk + w1k1 + w2k2 + w3k3 + w4k4,

where k1, k2, k3, and k4 have the form

(2)

k1 = h f (tk , yk ),

k2 = h f (tk + a1h, yk + b1k1),

k3 = h f (tk + a2h, yk + b2k1 + b3k2),

k4 = h f (tk + a3h, yk + b4k1 + b5k2 + b6k3).

By matching coefficients with those of the Taylor series method of order N = 4 so that the local truncation error is of order O(h^5), Runge and Kutta were able to obtain the following system of equations:

(3)
b1 = a1,
b2 + b3 = a2,
b4 + b5 + b6 = a3,
w1 + w2 + w3 + w4 = 1,
w2 a1 + w3 a2 + w4 a3 = 1/2,
w2 a1^2 + w3 a2^2 + w4 a3^2 = 1/3,
w2 a1^3 + w3 a2^3 + w4 a3^3 = 1/4,
w3 a1 b3 + w4 (a1 b5 + a2 b6) = 1/6,
w3 a1 a2 b3 + w4 a3 (a1 b5 + a2 b6) = 1/8,
w3 a1^2 b3 + w4 (a1^2 b5 + a2^2 b6) = 1/12,
w4 a1 b3 b6 = 1/24.

The system involves 11 equations in 13 unknowns. Two additional conditions must be supplied to solve the system. The most useful choice is

(4) a1 = 1/2 and b2 = 0.

Then the solution for the remaining variables is

(5) a2 = 1/2, a3 = 1, b1 = 1/2, b3 = 1/2, b4 = 0, b5 = 0, b6 = 1,
    w1 = 1/6, w2 = 1/3, w3 = 1/3, w4 = 1/6.

The values in (4) and (5) are substituted into (2) and (1) to obtain the formula for the standard Runge-Kutta method of order N = 4, which is stated as follows. Start with the initial point (t0, y0) and generate the sequence of approximations using

(6) y_{k+1} = y_k + h (f1 + 2 f2 + 2 f3 + f4)/6,


where

(7)
f1 = f(t_k, y_k),
f2 = f(t_k + h/2, y_k + (h/2) f1),
f3 = f(t_k + h/2, y_k + (h/2) f2),
f4 = f(t_k + h, y_k + h f3).

Discussion about the Method

The complete development of the equations in (7) is beyond the scope of this book and can be found in advanced texts, but we can get some insights. Consider the graph of the solution curve y = y(t) over the first subinterval [t0, t1]. The function values in (7) are approximations for slopes to this curve. Here f1 is the slope at the left, f2 and f3 are two estimates for the slope in the middle, and f4 is the slope at the right (see Figure 9.9(a)). The next point (t1, y1) is obtained by integrating the slope function:

(8) y(t1) − y(t0) = integral from t0 to t1 of f(t, y(t)) dt.

If Simpson's rule is applied with step size h/2, the approximation to the integral in (8) is

(9) integral from t0 to t1 of f(t, y(t)) dt ~ (h/6)(f(t0, y(t0)) + 4 f(t_{1/2}, y(t_{1/2})) + f(t1, y(t1))),

where t_{1/2} is the midpoint of the interval. Three function values are needed; hence we make the obvious choice f(t0, y(t0)) = f1 and f(t1, y(t1)) ~ f4. For the value in the middle we choose the average of f2 and f3:

f(t_{1/2}, y(t_{1/2})) ~ (f2 + f3)/2.

These values are substituted into (9), which is used in equation (8) to get y1:

(10) y1 = y0 + (h/6)(f1 + 4 (f2 + f3)/2 + f4).

When this formula is simplified, it is seen to be equation (6) with k = 0. The graph for the integral in (9) is shown in Figure 9.9(b).

[Figure 9.9. (a) The predicted slopes m_j to the solution curve y = y(t) over [t0, t1], with m1 = f1, m2 = f2, m3 = f3, m4 = f4. (b) The integral approximation y(t1) − y0 = (h/6)(f1 + 2 f2 + 2 f3 + f4) for the graph z = f(t, y(t)).]

Step Size versus Error

The error term for Simpson's rule with step size h/2 is

(11) −y^(4)(c1) h^5/2880.

If the only error at each step were that given in (11), after M steps the accumulated error for the RK4 method would be

(12) −sum for k = 1 to M of y^(4)(c_k) h^5/2880 ~ ((b − a)/5760) y^(4)(c) h^4 = O(h^4).

The next theorem states the relationship between F.G.E. and step size. It is used to give us an idea of how much computing effort must be done when using the RK4 method.

Theorem 9.7 (Precision of the Runge-Kutta Method). Assume that y(t) is the solution to the I.V.P. If y(t) is in C^5[t0, b] and {(t_k, y_k)} for k = 0, ..., M is the sequence of approximations generated by the Runge-Kutta method of order 4, then

(13) |e_k| = |y(t_k) − y_k| = O(h^4),
     |eps_{k+1}| = |y(t_{k+1}) − y_k − h T_N(t_k, y_k)| = O(h^5).

In particular, the F.G.E. at the end of the interval will satisfy

(14) E(y(b), h) = |y(b) − y_M| = O(h^4).

Examples 9.10 and 9.11 illustrate Theorem 9.7. If approximations are computed using the step sizes h and h/2, we should have

(15) E(y(b), h) ~ C h^4

for the larger step size, and

(16) E(y(b), h/2) ~ C h^4/16 = (1/16) C h^4 ~ (1/16) E(y(b), h).

Hence the idea in Theorem 9.7 is that if the step size in the RK4 method is reduced by a factor of 1/2, we can expect the overall F.G.E. to be reduced by a factor of 1/16.

Example 9.10. Use the RK4 method to solve the I.V.P. y' = (t − y)/2 on [0, 3] with y(0) = 1. Compare solutions for h = 1, 1/2, 1/4, and 1/8.

Table 9.8 gives the solution values at selected abscissas. For the step size h = 0.25, a sample calculation is

f1 = (0.0 − 1.0)/2 = −0.5,
f2 = (0.125 − (1 + 0.25(0.5)(−0.5)))/2 = −0.40625,
f3 = (0.125 − (1 + 0.25(0.5)(−0.40625)))/2 = −0.4121094,
f4 = (0.25 − (1 + 0.25(−0.4121094)))/2 = −0.3234863,
y1 = 1.0 + 0.25 (−0.5 + 2(−0.40625) + 2(−0.4121094) − 0.3234863)/6 = 0.8974915.
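The sample calculation, and the error column of Table 9.9 below, can be checked with a few lines of Python (our own sketch of formulas (6) and (7); the exact solution y = 3 e^(−t/2) + t − 2 is easily verified by substitution):

    import math

    def rk4_step(f, t, y, h):
        # formulas (6) and (7)
        f1 = f(t, y)
        f2 = f(t + h/2, y + h/2 * f1)
        f3 = f(t + h/2, y + h/2 * f2)
        f4 = f(t + h, y + h * f3)
        return y + h * (f1 + 2*f2 + 2*f3 + f4) / 6

    f = lambda t, y: (t - y) / 2
    exact = 3 * math.exp(-1.5) + 1.0          # y(3) = 1.6693905...
    for h, M in ((1.0, 3), (0.5, 6), (0.25, 12), (0.125, 24)):
        t, y = 0.0, 1.0
        for _ in range(M):
            y = rk4_step(f, t, y, h)
            t += h
        print(h, y, exact - y)   # y(3) - y_M, the F.G.E. column of Table 9.9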

Example 9.11. Compare the F.G.E. when the RK4 method is used to solve y' = (t − y)/2 over [0, 3] with y(0) = 1 using step sizes 1, 1/2, 1/4, and 1/8.

Table 9.9 gives the F.G.E. for the various step sizes and shows that the error in the approximation to y(3) decreases by about 1/16 when the step size is reduced by a factor of 1/2:

E(y(3), h) = y(3) − y_M = O(h^4) ~ C h^4, where C = −0.000614.

A comparison of Examples 9.10 and 9.11 and Examples 9.8 and 9.9 shows what is meant by the statement "The RK4 method simulates the Taylor series method of order N = 4." For these examples, the two methods generate identical solution sets {(t_k, y_k)} over the given interval.

Table 9.8. Comparison of the RK4 Solutions with Different Step Sizes for y' = (t − y)/2 over [0, 3] with y(0) = 1

t_k     h = 1       h = 1/2     h = 1/4     h = 1/8     y(t_k) exact
0       1.0         1.0         1.0         1.0         1.0
0.125                                       0.9432392   0.9432392
0.25                            0.8974915   0.8974908   0.8974917
0.375                                       0.8620874   0.8620874
0.50                0.8364258   0.8364037   0.8364024   0.8364023
0.75                            0.8118696   0.8118679   0.8118678
1.00    0.8203125   0.8196285   0.8195940   0.8195921   0.8195920
1.50                0.9171423   0.9171021   0.9170998   0.9170997
2.00    1.1045125   1.1036826   1.1036408   1.1036385   1.1036383
2.50                1.3595575   1.3595168   1.3595145   1.3595144
3.00    1.6701860   1.6694308   1.6693928   1.6693906   1.6693905

Table 9.9. Relation between Step Size and F.G.E. for the RK4 Solutions to y' = (t − y)/2 over [0, 3] with y(0) = 1

Step size, h   Number of steps, M   Approximation to y(3), y_M   F.G.E., y(3) − y_M   O(h^4) ~ C h^4, C = −0.000614
1              3                    1.6701860                    −0.0007955           −0.0006140
1/2            6                    1.6694308                    −0.0000403           −0.0000384
1/4            12                   1.6693928                    −0.0000023           −0.0000024
1/8            24                   1.6693906                    −0.0000001           −0.0000001

over the given interval. The advantage of the RK4 method is obvious; no formulas for the higher derivatives need to be computed nor do they have to be in the program.

It is not easy to determine the accuracy to which a Runge-Kutta solution has been computed. We could estimate the size of y(4)(c) and use formula (12). Another way is to repeat the algorithm using a smaller step size and compare results. A third way is to adaptively determine the step size, which is done in Program 9.5. In Section 9.6 we will see how to change the step size for a multistep method.

SEC. 9.5 RUNGE-KUTTA METHODS 495

Runge-Kutta Methods of Order N = 2

The second-order Runge-Kutta method (denoted RK2) simulates the accuracy of the Taylor series method of order 2. Although this method is not as good to use as the RK4 method, its proof is easier to understand and illustrates the principles involved.

To start, we write down the Taylor series formula for y(t + h):

(17) y(t + h) = y(t) + h y'(t) + (1/2) h^2 y''(t) + C_T h^3 + ...,

where C_T is a constant involving the third derivative of y(t) and the other terms in the series involve powers of h^j for j > 3.

The derivatives y'(t) and y''(t) in equation (17) must be expressed in terms of f(t, y) and its partial derivatives. Recall that

(18) y'(t) = f(t, y).

The chain rule for differentiating a function of two variables can be used to differentiate (18) with respect to t, and the result is

y''(t) = f_t(t, y) + f_y(t, y) y'(t).

Using (18), this can be written

(19) y''(t) = f_t(t, y) + f_y(t, y) f(t, y).

The derivatives (18) and (19) are substituted in (17) to give the Taylor expression for y(t + h):

(20) y(t + h) = y(t) + h f(t, y) + (1/2) h^2 f_t(t, y) + (1/2) h^2 f_y(t, y) f(t, y) + C_T h^3 + ....

Now consider the Runge-Kutta method of order N = 2, which uses a linear combination of two function values to express y(t + h):

(21) y(t + h) = y(t) + A h f0 + B h f1,

where

(22) f0 = f(t, y),
     f1 = f(t + Ph, y + Qh f0).

Next the Taylor polynomial approximation for a function of two independent variables is used to expand f(t, y) (see the Exercises). This gives the following representation for f1:

(23) f1 = f(t, y) + Ph f_t(t, y) + Qh f_y(t, y) f(t, y) + C_P h^2 + ...,

where C_P involves the second-order partial derivatives of f(t, y). Then (23) is used in (21) to get the RK2 expression for y(t + h):

(24) y(t + h) = y(t) + (A + B) h f(t, y) + B P h^2 f_t(t, y) + B Q h^2 f_y(t, y) f(t, y) + B C_P h^3 + ....

A comparison of similar terms in equations (20) and (24) will produce the following conclusions:

h f(t, y) = (A + B) h f(t, y)                              implies that 1 = A + B,
(1/2) h^2 f_t(t, y) = B P h^2 f_t(t, y)                    implies that 1/2 = B P,
(1/2) h^2 f_y(t, y) f(t, y) = B Q h^2 f_y(t, y) f(t, y)    implies that 1/2 = B Q.

Hence, if we require that A, B, P, and Q satisfy the relations

(25) A + B = 1,  B P = 1/2,  B Q = 1/2,

then the RK2 method in (24) will have the same order of accuracy as Taylor's method in (20).

Since there are only three equations in four unknowns, the system of equations (25) is underdetermined, and we are permitted to choose one of the coefficients. There are several special choices that have been studied in the literature; we mention two of them.

Case (i): Choose A = 1/2. This choice leads to B = 1/2, P = 1, and Q = 1. If equation (21) is written with these parameters, the formula is

(26) y(t + h) = y(t) + (h/2)(f(t, y) + f(t + h, y + h f(t, y))).

When this scheme is used to generate {(t_k, y_k)}, the result is Heun's method.

Case (ii): Choose A = 0. This choice leads to B = 1, P = 1/2, and Q = 1/2. If equation (21) is written with these parameters, the formula is

(27) y(t + h) = y(t) + h f(t + h/2, y + (h/2) f(t, y)).

When this scheme is used to generate {(t_k, y_k)}, it is called the modified Euler-Cauchy method.
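Both special cases are one-liners in code. A Python sketch (our own) running them on the same test problem as Examples 9.10 and 9.11, whose exact solution y = 3 e^(−t/2) + t − 2 follows by substitution; both errors shrink like O(h^2):

    import math

    def heun_step(f, t, y, h):            # case (i): A = B = 1/2, P = Q = 1
        return y + h/2 * (f(t, y) + f(t + h, y + h * f(t, y)))

    def modified_euler_step(f, t, y, h):  # case (ii): A = 0, B = 1, P = Q = 1/2
        return y + h * f(t + h/2, y + h/2 * f(t, y))

    f = lambda t, y: (t - y) / 2
    for step in (heun_step, modified_euler_step):
        t, y = 0.0, 1.0
        for _ in range(24):               # h = 0.125 over [0, 3]
            y = step(f, t, y, 0.125)
            t += 0.125
        exact = 3 * math.exp(-t/2) + t - 2
        print(step.__name__, y, y - exact)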

John H. Mathews and Kurtis K. Fink. Numerical Methods Using Matlab, 4th Edition. Prentice-Hall Inc., Upper Saddle River, New Jersey, USA, 2004. ISBN 0-13-065248-2. http://vig.prenhall.com/

Deriving the Runge-Kutta Method

Deriving the midpoint method

The Taylor method is the gold standard for generating better numerical solutions to first order differential equations. A serious weakness in the Taylor method, however, is the need to compute a large number of partial derivatives and do other symbolic manipulation tasks.

For example, the second-order Taylor method for the equation

y'(t) = f(t, y(t))

is

y_{i+1} = y_i + h f(t_i, y_i) + (h^2/2) [ (df/dt)(t_i, y_i) + f(t_i, y_i) (df/dy)(t_i, y_i) ].

Higher-order formulas get even uglier.

The midpoint method arises from an attempt to replace the second-order Taylor method with a simpler "Euler-like" formula

y_{i+1} = y_i + h f(t_i + alpha, y_i + beta).

We can solve for the best values of alpha and beta by applying a first-order Taylor expansion to the term f(t_i + alpha, y_i + beta):

y_{i+1} = y_i + h [ f(t_i, y_i) + (df/dt)(t_i, y_i) alpha + (df/dy)(t_i, y_i) beta + (d^2 f/dt dy)(t_i, y_i) alpha beta + ... ].

The choices of alpha and beta that make this look as close as possible to the second-order Taylor formula above are

alpha = h/2,  beta = (h/2) f(t_i, y_i),

leading to the so-called midpoint rule:

y_{i+1} = y_i + h f(t_i + h/2, y_i + (h/2) f(t_i, y_i)).

This formula has a simple interpretation. Essentially what we are doing here is driving an Euler estimate halfway across the interval [t_i, t_{i+1}] and computing the slope

f(t_i + h/2, y_i + (h/2) f(t_i, y_i))

at that midpoint. We then rewind back to the point (t_i, y_i) and drive an Euler estimate all the way across the interval to t_{i+1}, using this new midpoint slope in place of the old Euler slope.

The Runge-Kutta Method

The textbook points out that it is possible to derive similar methods by starting with more complex Euler-like formulas with more free parameters and then trying to match those Euler-like methods to higher order Taylor formulas. The Runge-Kutta method is essentially an attempt to match a more complex Euler-like formula to a fourth order Taylor method.

The problem with this is that the Euler-like formula needed to match all the complexity of the fourth-order Taylor method formula is quite complex. The textbook states in exercise 31 at the end of section 5.4 that the formula required is

y_{i+1} = y_i + (h/6) f(t_i, y_i)
        + (h/3) f(t_i + alpha_1 h, y_i + beta_1 h f(t_i, y_i))
        + (h/3) f(t_i + alpha_2 h, y_i + beta_2 h f(t_i + gamma_2 h, y_i + gamma_3 h f(t_i, y_i)))
        + (h/6) f(t_i + alpha_3 h, y_i + beta_3 h f(t_i + gamma_4 h, y_i + gamma_5 h f(t_i + gamma_6 h, y_i + gamma_7 h f(t_i, y_i)))).

It is very messy to do so, but this form can be expanded out and matched against the Taylor formula of order four. This allows us to solve for all the unknown coefficients. A somewhat cleaner alternative derivation is based on the following argument. Another way to solve for y_{i+1} is to compute the integral

integral from t_i to t_{i+1} of y'(t) dt = y(t_{i+1}) − y(t_i) = y_{i+1} − y_i.

We can imagine beginning to compute the integral by noting that y'(t) = f(t, y(t)):

integral from t_i to t_{i+1} of y'(t) dt = integral from t_i to t_{i+1} of f(t, y(t)) dt.

Unfortunately, we cannot do the integral on the right-hand side exactly, because we don't know what y(t) is. That is, after all, the unknown we are trying to solve for. Even though we can't compute the integral on the right exactly, we can estimate it. For example, applying Simpson's rule to the integral produces the estimate

integral from t_i to t_{i+1} of f(t, y(t)) dt ~ (h/6) [ f(t_i, y(t_i)) + 4 f((t_i + t_{i+1})/2, y((t_i + t_{i+1})/2)) + f(t_{i+1}, y(t_{i+1})) ].

The Runge-Kutta method takes this estimate as a starting point. The thing we need to do to make this estimate work is to find a way to estimate the unknown terms y((t_i + t_{i+1})/2) and y(t_{i+1}).

The first step is to rewrite the estimate as

(h/6) [ f(t_i, y(t_i)) + 2 f((t_i + t_{i+1})/2, y((t_i + t_{i+1})/2)) + 2 f((t_i + t_{i+1})/2, y((t_i + t_{i+1})/2)) + f(t_{i+1}, y(t_{i+1})) ].

We write the middle term twice because we are going to develop two different estimates for y((ti + ti+1) /2). The thinking is that the mistakes we make in developing those two interior estimates may partly cancel each other out.

Here is how we will develop our estimates.

1. y(t_i) is just y_i.

2. We estimate the first y((t_i + t_{i+1})/2) by driving the original Euler slope k_1 halfway across the interval:

   k_1 = f(t_i, y_i),
   y((t_i + t_{i+1})/2) ~ y_i + (h/2) k_1.

3. As in the midpoint rule, we compute a second slope at the midpoint we just estimated. We then rewind to the start and drive that slope halfway across the interval again:

   k_2 = f(t_i + h/2, y_i + (h/2) k_1),
   y((t_i + t_{i+1})/2) ~ y_i + (h/2) k_2.

4. We use the second estimated midpoint to compute another slope and then drive that slope all the way across the interval:

   k_3 = f(t_i + h/2, y_i + (h/2) k_2),
   y(t_{i+1}) ~ y_i + h k_3,
   k_4 = f(t_i + h, y_i + h k_3).

Substituting all of these estimates into the Simpson's rule formula above gives

y_{i+1} − y_i = integral from t_i to t_{i+1} of f(t, y(t)) dt ~ (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4),

or

y_{i+1} = y_i + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4).

Summary Of The Method

k_1 = f(t_i, y_i)
k_2 = f(t_i + h/2, y_i + (h/2) k_1)
k_3 = f(t_i + h/2, y_i + (h/2) k_2)
k_4 = f(t_i + h, y_i + h k_3)
y_{i+1} = y_i + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4)

Taylor Series Methods

To derive these methods we start with a Taylor expansion:

y(t + dt) ~ y(t) + dt y'(t) + (1/2) dt^2 y''(t) + ... + (1/r!) y^(r)(t) dt^r.

Let's say we want to truncate this at the second derivative and base a method on that. The scheme is, then:

y_{n+1} = y_n + f_n dt + (f'_n/2) dt^2,

where f_n = f(t_n, y_n) and f' denotes the total time derivative of f along the solution. The Taylor series method can be written as

y_{n+1} = y_n + dt F(t_n, y_n, dt),

where F = f + (1/2) dt f'. If we take the LTE for this scheme, we get (as expected)

LTE(t) = (y(t_n + dt) − y(t_n))/dt − f(t_n, y(t_n)) − (1/2) dt f'(t_n, y(t_n)) = O(dt^2).

Of course, we designed this method to give us this order, so it shouldn't be a surprise!

So the LTE is reasonable, but what about the global error? Just as in the Euler forward case, we can show that the global error is of the same order as the LTE. How do we do this? We have two facts:

y(t_{n+1}) = y(t_n) + dt F(t_n, y(t_n), dt) + dt LTE(t_n),

and

y_{n+1} = y_n + dt F(t_n, y_n, dt),

where F = f + (1/2) dt f'. Now we subtract these two:

|y(t_{n+1}) − y_{n+1}| = |y(t_n) − y_n + dt (F(t_n, y(t_n)) − F(t_n, y_n)) + dt LTE|
                      <= |y(t_n) − y_n| + dt |F(t_n, y(t_n)) − F(t_n, y_n)| + dt |LTE|.

Now, if F is Lipschitz continuous, we can say

e_{n+1} <= (1 + dt L) e_n + dt |LTE|.

Of course, this is the same proof as for Euler's method, except that now we are looking at F, not f, and the LTE is of higher order. We can do this no matter which Taylor series method we use, no matter how many terms we keep before truncating.

Advantages and Disadvantages of the Taylor Series Method

Advantages:
(a) one step, explicit;
(b) can be high order;
(c) it is easy to show that the global error is the same order as the LTE.

Disadvantages: needs the explicit form of the derivatives of f.

Runge-Kutta Methods

To avoid the disadvantage of the Taylor series method, we can use Runge-Kutta methods. These are still one-step methods, but they depend on estimates of the solution at different points. They are written out so that they don't look messy:

Second-Order Runge-Kutta Methods:

k_1 = dt f(t_i, y_i),
k_2 = dt f(t_i + alpha dt, y_i + beta k_1),
y_{i+1} = y_i + a k_1 + b k_2.

Let's see how we can choose the parameters a, b, alpha, beta so that this method has the highest-order LTE possible. Take Taylor expansions to express the LTE:

k_1(t) = dt f(t, y(t)),
k_2(t) = dt f(t + alpha dt, y + beta k_1(t))
       = dt [ f(t, y(t)) + f_t(t, y(t)) alpha dt + f_y(t, y(t)) beta k_1(t) + O(dt^2) ].

LTE(t) = (y(t + dt) − y(t))/dt − (a/dt) k_1(t) − (b/dt) k_2(t)
       = (y(t + dt) − y(t))/dt − a f(t, y(t)) − b f(t, y(t)) − b f_t(t, y(t)) alpha dt − b f_y(t, y(t)) beta f(t, y(t)) dt + O(dt^2)
       = y'(t) + (1/2) dt y''(t) − (a + b) f(t, y(t)) − dt (b alpha f_t(t, y(t)) + b beta f(t, y(t)) f_y(t, y(t))) + O(dt^2)
       = (1 − a − b) f + (1/2 − b alpha) dt f_t + (1/2 − b beta) dt f_y f + O(dt^2).

So we want a = 1 − b, alpha = beta = 1/(2b).

Fourth-Order Runge-Kutta Methods:

k_1 = dt f(t_i, y_i),                          (1.3)
k_2 = dt f(t_i + (1/2) dt, y_i + (1/2) k_1),   (1.4)
k_3 = dt f(t_i + (1/2) dt, y_i + (1/2) k_2),   (1.5)
k_4 = dt f(t_i + dt, y_i + k_3),               (1.6)
y_{i+1} = y_i + (1/6)(k_1 + 2 k_2 + 2 k_3 + k_4).   (1.7)

The second-order method requires 2 evaluations of f at every timestep; the fourth-order method requires 4 evaluations of f at every timestep. In general, for an rth-order Runge-Kutta method we need S(r) evaluations of f for each timestep, where

S(r) = r          for r <= 4,
       r + 1      for r = 5 and r = 6,
       >= r + 2   for r >= 7.

Practically speaking, people stop at r = 5.

Advantages of Runge-Kutta Methods

1. One-step method: the global error is of the same order as the local error.
2. You don't need to know the derivatives of f.
3. They are well suited to automatic error control.

Automatic Error Control. Uniform grid spacing (in this case, time steps) is good for some cases but not always. Sometimes we deal with problems where varying the grid size makes sense. How do you know when to change the step size? If we have an rth-order scheme and an (r+1)th-order scheme, we can take the difference between these two to be the error in the scheme, and make the step size smaller if we prefer a smaller error, or larger if we can tolerate a larger error.

For automatic error control you are seemingly computing a "useless" (r+1)th-order scheme: what a waste! But with Runge-Kutta we can take a fifth-order method and a fourth-order method that use the same k's, so the error estimate costs only a little extra work at each step.
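A toy Python sketch of this idea (our own), pairing Heun's method (order 2) with the embedded Euler step (order 1) exactly as in the extended tableau shown earlier; production codes use more careful step-size update rules:

    def adaptive_heun_euler(f, t0, y0, t_end, h, tol):
        t, y = t0, y0
        while t < t_end - 1e-12:
            h = min(h, t_end - t)
            k1 = f(t, y)
            k2 = f(t + h, y + h * k1)
            y_heun = y + h/2 * (k1 + k2)   # order 2
            y_euler = y + h * k1           # order 1, reuses the same k1
            err = abs(y_heun - y_euler)    # estimate of the local error
            if err > tol:
                h /= 2                     # reject the step, retry smaller
                continue
            t, y = t + h, y_heun           # accept the higher-order value
            if err < tol / 4:
                h *= 2                     # error comfortably small: grow h
        return y

    f = lambda t, y: (t - y) / 2
    print(adaptive_heun_euler(f, 0.0, 1.0, 3.0, 0.5, 1e-4))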

