# Can Properties Be Discovered From a Maclaurin Series?


According to Taylor's theorem, if we want a good approximation to a function in the region near x = a, we need to find the first, second, third (and so on) derivatives of the function and evaluate them at a. We then multiply those values by corresponding powers of (x − a), giving us the Taylor Series expansion of the function f(x) about x = a:

\[ f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \ldots \]

We now take a particular case of the Taylor Series, in the region near x = 0. Such a polynomial is called the Maclaurin Series.

The infinite series expansion for f(x) about x = 0 becomes:

\[ f(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \ldots \]

f '(0) is the first derivative evaluated at x = 0, f ''(0) is the second derivative evaluated at x = 0, and so on.

Maclaurin series are named after the Scottish mathematician Colin Maclaurin.

The Maclaurin series of a function f up to order n may be found in Mathematica using Series[f, {x, 0, n}]. The nth term of a Maclaurin series of a function f can be computed using SeriesCoefficient[f, {x, 0, n}].

Maclaurin series are a type of series expansion in which all terms are nonnegative integer powers of the variable. Other more general types of series include the Laurent series and the Puiseux series.

Neither the natural logarithm function (ln x) nor any of its derivatives exist at x = 0, so there is no polynomial Maclaurin expansion of ln x.

Series for inverse trigonometric functions can be complicated to find directly, since successive derivatives become unmanageable quite quickly. Relating the derivatives of such functions to series we already know is useful in finding series for inverse trigonometric functions.

The Maclaurin series imposes strict conditions: the function must be infinitely differentiable and must exist at x = 0.

## Problem:-

To expand a function to n terms, we must first find the derivative of each order up to n individually and then add the resulting terms. This becomes lengthy and impractical, since a series cannot in general be expanded to the nth term by hand. Taylor's formula removes this difficulty.

Our aim is to find a polynomial that gives us a good approximation to some function. We find the desired polynomial approximation using the Taylor Series.

Polynomial Approximations. Assume that we have a function f for which we can easily compute its value f(a) at some point a, but we do not know how to find f(x) at other points x close to a. For instance, we know that sin 0 = 0, but what is sin 0.1? One way to deal with the problem is to find an approximate value of f(x). If we look at the graph of f(x) and its tangent line at (a, f(a)), we see that the points of the tangent line are close to the graph, so the y-coordinates of those points are possible approximations for f(x).
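The tangent-line idea above can be sketched in a few lines of Python (a minimal illustration; the function and evaluation point are chosen only as examples):

```python
import math

# Tangent-line (first-order Taylor) approximation of f near a:
# f(x) ≈ f(a) + f'(a) * (x - a)
def tangent_approx(f, df, a, x):
    return f(a) + df(a) * (x - a)

# sin 0.1, approximated from a = 0, where sin 0 = 0 and cos 0 = 1
approx = tangent_approx(math.sin, math.cos, 0.0, 0.1)
print(approx)         # 0.1
print(math.sin(0.1))  # 0.0998334...
```

The approximation 0.1 differs from the true value by less than 0.0002, because near 0 the graph of sin x hugs its tangent line y = x.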

In the sections on Taylor polynomials, power series, and Motivation, we gave "infinite degree" Taylor polynomials even though we did not yet know what a series was. In this section we hope to make sense of those "infinite degree" polynomials.

We begin by assuming that we have a function f(x) that is equal to a power series expansion, say around x = a:

\[ f(x) = \sum_{n=0}^{\infty} a_n (x-a)^n = a_0 + a_1(x-a) + a_2(x-a)^2 + a_3(x-a)^3 + \cdots \]

The example of such a function and power series that you might use as a model is the function and series

\[ \frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots \]

This result is precisely what we predicted in our Motivation section. We computed the Taylor polynomial and wondered whether we could replace the upper limit on the Taylor series summation by infinity. However, at the moment we only know that this is true when the function has a power series expansion like the equation above.

In order to find such a series, some conditions have to be in place:

The function f(x) has to be infinitely differentiable (that is, we can find the first derivative, the second derivative, the third derivative, and so on forever).

The function f(x) has to be defined in a region near the value x = a.

## DETAILED DESCRIPTION:-

A series is the sum of the terms of a sequence. Finite sequences and series have defined first and last terms, whereas infinite sequences and series continue indefinitely.

In mathematics, given an infinite sequence of numbers {a_n}, a series is informally the result of adding all those terms together: a_1 + a_2 + a_3 + ···. This can be written more compactly using the summation symbol ∑. An example is the famous series from Zeno's dichotomy:

\[ \sum_{n=1}^\infty \frac{1}{2^n} = \frac{1}{2}+ \frac{1}{4}+ \frac{1}{8}+\cdots+ \frac{1}{2^n}+\cdots \]

The terms of the series are often produced according to a certain rule, such as by a formula, or by an algorithm. As there are an infinite number of terms, this notion is often called an infinite series. Unlike finite summations, infinite series need tools from mathematical analysis to be fully understood and manipulated. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics and computer science.

## Basic properties

Series can be composed of terms from any one of many different sets including real numbers, complex numbers, and functions. The definition given here will be for real numbers, but can be generalized.

Given an infinite sequence of real numbers {a_n}, define

\[ S_N =\sum_{n=0}^N a_n=a_0+a_1+a_2+\cdots+a_N. \]
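As a quick illustration of partial sums (a minimal sketch using exact rational arithmetic), the partial sums of Zeno's series approach 1:

```python
from fractions import Fraction

# Partial sums S_N of Zeno's series  sum_{n=1}^inf 1/2^n,
# which approach 1 as N grows (S_N = 1 - 1/2^N).
def partial_sum(N):
    return sum(Fraction(1, 2 ** n) for n in range(1, N + 1))

for N in (1, 2, 3, 10):
    print(N, partial_sum(N))  # 1/2, 3/4, 7/8, 1023/1024
```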

Series are classified not only by whether they converge or diverge; they can also be split up based on the properties of the terms a_n (absolute or conditional convergence), the type of convergence of the series (pointwise or uniform), and the class of the term a_n (whether it is a real number, an arithmetic progression, a trigonometric function, etc.).

## Development of infinite series

Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.

The idea of an infinite series expansion of a function was conceived in India by Madhava in the 14th century, who also developed precursors to the modern concepts of the power series, the Taylor series, the Maclaurin series, rational approximations of infinite series, and infinite continued fractions. He discovered a number of infinite series, including the Taylor series of the trigonometric functions sine, cosine, tangent and arctangent, the Taylor series approximations of the sine and cosine functions, and power series for the radius, diameter, circumference, angle θ, π and π/4. His students and followers in the Kerala School further expanded his works with various other series expansions and approximations, until the 16th century.

In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. Leonhard Euler in the 18th century developed the theory of hypergeometric series and q-series.

Taylor's theorem states that any function satisfying certain conditions can be expressed as a Taylor series.

The Taylor series of a function f(x) about a point a up to order n may be found using Series[f, {x, a, n}]. The nth term of a Taylor series of a function f can be computed in Mathematica using SeriesCoefficient[f, {x, a, n}] and is given by the inverse Z-transform

\[ a_n = \mathcal{Z}^{-1}\!\left[\frac{1}{z-a}\right](n). \]

Taylor series of some common functions include

\begin{eqnarray*} \frac{1}{1-x} & = & \frac{1}{1-a}+\frac{x-a}{(1-a)^2}+\frac{(x-a)^2}{(1-a)^3}+\cdots \hspace{.3in} (3) \\ \\ \cos x & = & \cos a-\sin a\,(x-a)-\tfrac{1}{2}\cos a\,(x-a)^2+\tfrac{1}{6}\sin a\,(x-a)^3+\cdots \hspace{.3in} (4) \\ \\ e^x & = & e^a\left[1+(x-a)+\tfrac{1}{2}(x-a)^2+\tfrac{1}{6}(x-a)^3+\cdots\right] \hspace{.3in} (5) \\ \\ \ln x & = & \ln a+\frac{x-a}{a}-\frac{(x-a)^2}{2a^2}+\frac{(x-a)^3}{3a^3}-\cdots \hspace{.3in} (6) \\ \\ \sin x & = & \sin a+\cos a\,(x-a)-\tfrac{1}{2}\sin a\,(x-a)^2-\tfrac{1}{6}\cos a\,(x-a)^3+\cdots \hspace{.3in} (7) \\ \\ \tan x & = & \tan a+\sec^2 a\,(x-a)+\sec^2 a\tan a\,(x-a)^2+\sec^2 a\left(\sec^2 a-\tfrac{2}{3}\right)(x-a)^3+\cdots \hspace{.3in} (8) \end{eqnarray*}
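One way to sanity-check an expansion like the one for e^x numerically is to compare a truncated sum with the true function; a small Python sketch (the values of x and a are arbitrary examples):

```python
import math

# Partial Taylor sum of e^x about a:
# e^x ≈ e^a [1 + (x-a) + (x-a)^2/2! + (x-a)^3/3! + ...]
def exp_taylor(x, a, n_terms):
    return math.exp(a) * sum((x - a) ** k / math.factorial(k)
                             for k in range(n_terms))

x, a = 1.3, 1.0
print(exp_taylor(x, a, 10))  # ~3.6693, agreeing with math.exp(1.3)
print(math.exp(x))
```

With |x − a| = 0.3, ten terms already agree with e^x to about ten decimal places.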

To derive the Taylor series of a function f(x), note that the integral of the (n+1)st derivative f^{(n+1)} of f(x) from the point x_0 to an arbitrary point x is given by

\[ \int_{x_0}^x f^{(n+1)}(x)\,dx = \left[f^{(n)}(x)\right]_{x_0}^x = f^{(n)}(x)-f^{(n)}(x_0), \]

(9)

where f^{(n)}(x_0) is the nth derivative evaluated at x_0, and is therefore simply a constant. Now integrate a second time to obtain

\begin{eqnarray*} \int_{x_0}^x\left[\int_{x_0}^x f^{(n+1)}(x)\,dx\right]dx & = & \int_{x_0}^x\left[f^{(n)}(x)-f^{(n)}(x_0)\right]dx \\ \\ & = & \left[f^{(n-1)}(x)\right]_{x_0}^x-(x-x_0)f^{(n)}(x_0) \\ \\ & = & f^{(n-1)}(x)-f^{(n-1)}(x_0)-(x-x_0)f^{(n)}(x_0), \end{eqnarray*}

(10)

where f^{(k)}(x_0) is again a constant. Integrating a third time,

\[ \int_{x_0}^x\int_{x_0}^x\int_{x_0}^x f^{(n+1)}(x)\,(dx)^3 = f^{(n-2)}(x)-f^{(n-2)}(x_0)-(x-x_0)f^{(n-1)}(x_0)-\frac{(x-x_0)^2}{2!}f^{(n)}(x_0), \]

(11)

and continuing up to n+1 integrations then gives

\[ \underbrace{\int\cdots\int_{x_0}^x}_{n+1} f^{(n+1)}(x)\,(dx)^{n+1} = f(x)-f(x_0)-(x-x_0)f'(x_0)-\frac{(x-x_0)^2}{2!}f''(x_0)-\cdots-\frac{(x-x_0)^n}{n!}f^{(n)}(x_0). \]

(12)

Rearranging then gives the one-dimensional Taylor series

\begin{eqnarray*} f(x) & = & f(x_0)+(x-x_0)f'(x_0)+\frac{(x-x_0)^2}{2!}f''(x_0)+\cdots+\frac{(x-x_0)^n}{n!}f^{(n)}(x_0)+R_n \hspace{.3in} (13) \\ \\ & = & \sum_{k=0}^{n}\frac{(x-x_0)^k f^{(k)}(x_0)}{k!}+R_n. \hspace{.3in} (14) \end{eqnarray*}

Here, R_n is a remainder term known as the Lagrange remainder, which is given by

\[ R_n = \underbrace{\int\cdots\int_{x_0}^x}_{n+1} f^{(n+1)}(x)\,(dx)^{n+1}. \]

(15)

Rewriting the repeated integral then gives

\[ R_n = \int_{x_0}^x f^{(n+1)}(t)\,\frac{(x-t)^n}{n!}\,dt. \]

(16)

Now, from the mean-value theorem for a function g(x), it must be true that

\[ \int_{x_0}^x g(x)\,dx = (x-x_0)\,g(x^*) \]

(17)

for some x^* in [x_0, x]. Therefore, integrating n+1 times gives the result

\[ R_n = \frac{(x-x_0)^{n+1}}{(n+1)!}\,f^{(n+1)}(x^*). \]
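The Lagrange remainder gives a concrete error bound for a truncated series. For example, for f(x) = e^x expanded about x_0 = 0 and evaluated at x = 1, we have f^{(n+1)}(x^*) = e^{x^*} ≤ e on [0, 1], so |R_n| ≤ e/(n+1)!. A short sketch checks this bound:

```python
import math

# Partial Maclaurin sum of e^x evaluated at x = 1:
# 1 + 1/1! + 1/2! + ... + 1/n!
def taylor_exp_at_1(n):
    return sum(1 / math.factorial(k) for k in range(n + 1))

# The Lagrange remainder bound here is |R_n| <= e / (n+1)!.
for n in (2, 5, 10):
    actual = abs(math.e - taylor_exp_at_1(n))
    bound = math.e / math.factorial(n + 1)
    print(n, actual, bound)  # the actual error stays below the bound
```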

In the last section, we learned about Taylor Series, where we found an approximating polynomial for a particular function in the region near some value x = a.


Find the Maclaurin Series expansion for f(x) = sin x.

\[ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \]

Plotting this polynomial against f(x) = sin x, we observe that the polynomial is a good approximation to f(x) = sin x near x = 0. In fact, it is quite good between −3 ≤ x ≤ 3.
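A quick numerical check (a sketch; here the partial sum keeps terms up to x^11) confirms that the polynomial tracks sin x closely on [−3, 3]:

```python
import math

# Maclaurin partial sum for sin x, keeping terms up to x^11.
def sin_poly(x):
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(6))  # k = 0..5 -> x, x^3, ..., x^11

# Largest error over a grid covering -3 <= x <= 3
worst = max(abs(sin_poly(t / 10) - math.sin(t / 10))
            for t in range(-30, 31))
print(worst)  # the polynomial stays very close to sin x on [-3, 3]
```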

Find the Maclaurin Series expansion of f(x)

Find the Maclaurin Series expansion of cos x.

## Finding π using an infinite series

In the 17th century, Leibniz used the series expansion of arctan x to find an approximation of π.

We start with the first derivative:

\[ \frac{d}{dx}\arctan x = \frac{1}{1+x^2} \]

The value of this derivative when x = 0 is 1. Similarly for the subsequent derivatives:

\[ f''(0) = 0, \quad f'''(0) = -2, \quad f^{(4)}(0) = 0, \quad f^{(5)}(0) = 24 \]

Now we can substitute into the Maclaurin Series formula:

\[ \arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots \]

We can substitute x = 1 into the above expression and get the following expansion for π:

\[ \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \]

All very well, but this was not a good way to find the value of π, because this expansion converges very slowly.

Even after adding 1000 terms, we don't have 3 decimal place accuracy. We know now that π = 3.141 592 653 5...
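This slow convergence is easy to verify with a short sketch of the Leibniz series:

```python
import math

# Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
def leibniz_pi(n_terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

approx = leibniz_pi(1000)
print(approx)                  # ~3.1405926...
print(abs(math.pi - approx))   # error is still about 0.001
```

After 1000 terms the error is roughly 1/1000, so the third decimal place is still wrong.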

## Can properties of a function be discovered from its Maclaurin series?

Yes. If the Maclaurin expansion of a function locally converges to the function, then you know the function is smooth. In addition, if the remainder of the Maclaurin expansion converges to 0, the function is analytic. All of the derivatives at 0 are also given, and hence so are the function's value at 0, its slope at 0, and its concavity at 0 (if the coefficient of x^2 is not 0).

No: consider f(x) = e^(-1/x^2) for x ≠ 0, with f(0) = 0. This is a well-known example of a function with a Maclaurin series whose resulting series does NOT represent f(x)! The series has all 0 coefficients, and can give no properties other than the derivatives mentioned under "Yes".
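A short numerical sketch shows why every Maclaurin coefficient of this function is 0: near x = 0 it vanishes faster than any power of x.

```python
import math

# The classic flat function: smooth everywhere, all derivatives 0 at 0,
# yet not identically 0.
def f(x):
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

# f(x) / x^10 still tends to 0 as x -> 0, so even the 10th-order
# Maclaurin coefficient is forced to be 0 (and likewise every order).
for x in (0.5, 0.2, 0.1):
    print(x, f(x) / x ** 10)
```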

So, to find the value of a function using its Maclaurin Series to a given accuracy, one only needs to use the number of terms that gives the required accuracy.

## Applications of Taylor Series:-

We started studying Taylor Series because we said that polynomial functions are easy, and that if we could find a way of representing complicated functions as series ("infinite polynomials") then maybe some properties of functions would be easy to study too. In this section, we'll show you a few ways in which Taylor series can make life easy.

## Evaluating definite integrals

Remember that we've said that some functions have no antiderivative which can be expressed in terms of familiar functions. This makes evaluating definite integrals of these functions difficult because the Fundamental Theorem of Calculus cannot be used. However, if we have a series representation of a function, we can often use it to evaluate a definite integral.

Here is an example. Suppose we want to evaluate the definite integral

\[ \int_0^1 \sin(x^2)~dx \]

The integrand has no antiderivative expressible in terms of familiar functions. However, we know how to find its Taylor series: we know that

\[ \sin t = t - \frac{t^3}{3!} + \frac{t^5}{5!} - \frac{t^7}{7!} + \ldots \]

Now if we substitute $ t = x^2 $, we have

\[ \sin(x^2) = x^2 - \frac{x^6}{3!} + \frac{x^{10}}{5!} - \frac{x^{14}}{7!} + \ldots \]

In spite of the fact that we cannot antidifferentiate the function, we can antidifferentiate the Taylor series:

\begin{eqnarray*} \int_0^1 \sin(x^2)~dx & = & \int_0^1 (x^2 - \frac{x^6}{3!} + \frac{x^{10}}{5!} - \frac{x^{14}}{7!} + \ldots)~dx \\ \\ & = & (\frac{x^3}{3} - \frac{x^7}{7\cdot 3!} + \frac{x^{11}}{11\cdot 5!} - \frac{x^{15}}{15\cdot 7!} +\ldots)|_0^1 \\ \\ & = & \frac 13 - \frac{1}{7\cdot 3!} + \frac{1}{11\cdot 5!} - \frac{1}{15\cdot 7!} + \ldots \end{eqnarray*}

Notice that this is an alternating series, so we know that it converges. If we add up the first four terms, we find that the series converges to 0.31026.
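A sketch of this computation, where the general term (−1)^k / ((4k+3)(2k+1)!) is read off from the antiderivative above:

```python
import math

# Partial sums of the series for the integral of sin(x^2) from 0 to 1:
# sum_k (-1)^k / ((4k+3) * (2k+1)!)
def integral_sin_x2(n_terms):
    return sum((-1) ** k / ((4 * k + 3) * math.factorial(2 * k + 1))
               for k in range(n_terms))

print(integral_sin_x2(4))  # ~0.31027 with only four terms
```

Because the series alternates, the error after four terms is smaller than the fifth term, which is already tiny.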

## Understanding asymptotic behaviour

Sometimes, a Taylor series can tell us useful information about how a function behaves in an important part of its domain. Here is an example which will demonstrate.

A famous fact from electricity and magnetism says that a charge q generates an electric field whose strength is inversely proportional to the square of the distance from the charge. That is, at a distance r away from the charge, the electric field is

\[ E = \frac{kq}{r^2} \]

where k is some constant of proportionality.

Oftentimes an electric charge is accompanied by an equal and opposite charge nearby. Such an object is called an electric dipole. To describe this, we will put a charge q at the point $ x = d $ and a charge -q at $ x = -d $.

Along the x axis, the strength of the electric field is the sum of the electric fields from each of the two charges. In particular,

\[ E = \frac{kq}{(x-d)^2} - \frac{kq}{(x+d)^2} \]

If we are interested in the electric field far away from the dipole, we can consider what happens for values of x much larger than d. We will use a Taylor series to study the behaviour in this region.

\[ E = \frac{kq}{(x-d)^2} - \frac{kq}{(x+d)^2} = \frac{kq}{x^2(1-\frac dx)^2} - \frac{kq}{x^2(1+\frac dx)^2} \]

Remember that the geometric series has the form

\[ \frac 1{1-u} = 1 + u + u^2 + u^3 + u^4 + \ldots \]

If we differentiate this series, we obtain

\[ \frac{1}{(1-u)^2} = 1 + 2u + 3u^2 + 4u^3 + \ldots \]

Into this expression, we can substitute $ u = \frac dx $ to obtain

\[ \frac{1}{(1-\frac dx)^2} = 1+\frac{2d}{x} + \frac{3d^2}{x^2} + \frac{4d^3}{x^3} + \ldots \]

In the same way, if we substitute $ u = -\frac dx $, we have

\[ \frac{1}{(1+\frac dx)^2} = 1-\frac{2d}{x} + \frac{3d^2}{x^2} - \frac{4d^3}{x^3} + \ldots \]

Now putting this together gives

\begin{eqnarray*} E & = & \frac{kq}{x^2(1-\frac dx)^2} - \frac{kq}{x^2(1+\frac dx)^2} \\ \\ & = & \frac{kq}{x^2}\big[ (1+\frac{2d}{x} + \frac{3d^2}{x^2} + \frac{4d^3}{x^3} + \ldots) -(1-\frac{2d}{x} + \frac{3d^2}{x^2} - \frac{4d^3}{x^3} + \ldots) \big] \\ \\ & = & \frac{kq}{x^2} \big[ \frac{4d}{x} + \frac{8d^3}{x^3} + \ldots \big] \\ \\ & \approx & \frac{4kqd}{x^3} \end{eqnarray*}

In other words, far away from the dipole where x is very large, we see that the electric field strength is proportional to the inverse cube of the distance. The two charges partially cancel one another out to produce a weaker electric field at a distance.
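We can check this far-field behaviour numerically; k, q and d below are arbitrary illustrative values:

```python
# Compare the exact dipole field with the leading-order Taylor
# approximation 4kqd / x^3 far from the dipole.
k, q, d = 1.0, 1.0, 0.01

def exact_field(x):
    return k * q / (x - d) ** 2 - k * q / (x + d) ** 2

def far_field(x):
    return 4 * k * q * d / x ** 3

x = 10.0
print(exact_field(x), far_field(x))  # nearly equal when x >> d
```

The next correction, 8kqd^3/x^5, is smaller by a factor of (d/x)^2, which is why the two numbers agree so closely.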

## Understanding the growth of functions

This example is similar in spirit to the previous one. Several times in this course, we have used the fact that exponentials grow much more rapidly than polynomials. We recorded this by saying that

\[ \lim_{x\to\infty} \frac{e^x}{x^n} = \infty \]

for any exponent n. Let's think about this for a minute because it is an important property of exponentials. The ratio $ \frac{e^x}{x^n} $ is measuring how large the exponential is compared to the polynomial. If this ratio were very small, we would conclude that the polynomial is larger than the exponential. But if the ratio is large, we would conclude that the exponential is much larger than the polynomial. The fact that this ratio becomes arbitrarily large means that the exponential becomes larger than the polynomial by a factor which is as large as we would like. This is what we mean when we say "an exponential grows faster than a polynomial."

To see why this relationship holds, we can write down the Taylor series for $ e^x $:

\begin{eqnarray*} \frac{e^x}{x^n} & = & \frac{1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots + \frac{x^n}{n!} + \frac{x^{n+1}}{(n+1)!} + \ldots}{x^n} \\ & = & \frac{1}{x^n} + \frac{1}{x^{n-1}} + \ldots + \frac{1}{n!} + \frac{x}{(n+1)!} +\ldots \\ & > & \frac{x}{(n+1)!} \end{eqnarray*}

Notice that this last term becomes arbitrarily large as $ x \to \infty $. That implies that the ratio we are interested in does as well:

\[ \lim_{x\to\infty}\frac{e^x}{x^n} = \infty \]

Basically, the exponential $ e^x $ grows faster than any polynomial because it behaves like an infinite polynomial whose coefficients are all positive.
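Sampling the ratio e^x/x^n (here with n = 10, an arbitrary choice) makes the growth visible:

```python
import math

# The ratio e^x / x^n grows without bound as x increases.
def ratio(x, n=10):
    return math.exp(x) / x ** n

samples = [ratio(x) for x in (10.0, 50.0, 100.0)]
print(samples)  # each value is vastly larger than the previous one
```

At x = 10 the polynomial still dominates, but by x = 100 the exponential is ahead by more than twenty orders of magnitude.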

## Solving differential equations

Some differential equations cannot be solved in terms of familiar functions (just as some functions do not have antiderivatives which can be expressed in terms of familiar functions). However, Taylor series can come to the rescue again. Here we will present two examples to give you the idea.

Example 1: We will solve the initial value problem

\begin{eqnarray*} \frac{dy}{dx} & = & y \\ y(0) & = & 1 \end{eqnarray*}

Of course, we know that the solution is $ y(x) = e^x $, but we will see how to discover this in a different way. First, we will write out the solution in terms of its Taylor series:

\[ y = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 +\ldots \]

Since this function satisfies the condition $ y(0) = 1 $, we must have $ y(0) = a_0 = 1 $.

We also have

\[ \frac{dy}{dx} = a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + \ldots \]

Since the differential equation says that $ \frac{dy}{dx} = y $, we can equate these two Taylor series:

\[ a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 +\ldots = a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + \ldots \]

If we now equate the coefficients, we obtain:

\[ \begin{array}{ll} a_0 = a_1, \hspace{.1in} & a_1 = a_0 = 1 \\ \\ a_1 = 2a_2, & a_2 = \frac{a_1}{2} = \frac 12 \\ \\ a_2 = 3a_3, & a_3 = \frac{a_2}{3} = \frac 1{2\cdot 3} \\ \\ a_3 = 4a_4, & a_4 = \frac{a_3}{4} = \frac 1{2\cdot 3\cdot 4} \\ \\ a_{n-1} = na_n, & a_n = \frac{a_{n-1}}{n} = \frac{1}{1\cdot 2\cdot 3 \cdots n} = \frac{1}{n!} \end{array} \]

This means that $ y = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots + \frac{x^n}{n!} + \ldots = e^x $, as we expect.

Of course, this is an initial value problem we know how to solve. The real value of this method is in studying initial value problems that we do not know how to solve.

Example 2: Here we will study Airy's equation with initial conditions:

\begin{eqnarray*} y^{\prime\prime} & = & xy \\ y(0) & = & 1 \\ y^\prime(0) & = & 0 \end{eqnarray*}

This equation is important in optics. In fact, it explains why a rainbow appears the way in which it does! As before, we will write the solution as a series:

\[ y = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 +\ldots \]
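Equating coefficients in y'' = xy gives the recurrence (n+2)(n+1)a_{n+2} = a_{n−1}, together with a_2 = 0; a small sketch computes the first coefficients under the stated initial conditions:

```python
# Power-series solution of Airy's equation y'' = x y with
# y(0) = 1, y'(0) = 0, using the recurrence
# (n+2)(n+1) a_{n+2} = a_{n-1}  for n >= 1, and a_2 = 0.
def airy_series_coeffs(n_max):
    a = [0.0] * (n_max + 1)
    a[0], a[1] = 1.0, 0.0   # initial conditions y(0) = 1, y'(0) = 0
    # a[2] stays 0, from the constant term of y'' = x y
    for n in range(1, n_max - 1):
        a[n + 2] = a[n - 1] / ((n + 2) * (n + 1))
    return a

a = airy_series_coeffs(9)
print(a[3], a[6])  # 1/6 and 1/180
```

So the solution begins y = 1 + x^3/6 + x^6/180 + ..., a series that converges for all x even though no familiar closed form exists.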


## SITES REFERRED:-

- en.wikipedia.org/wiki/Taylor_series
- www.cut-the-knot.org/Generalization/taylor.shtm
- www.mathworks.com/help/toolbox/symbolic/taylor.html
- www.phengkimving.com/...series/15_04_app_of_taylor_series.htm
- Google Images

## BOOKS REFERRED:-

- B.S. Grewal
- B.V. Ramana