The central limit theorem




The central limit theorem is the second fundamental theorem in probability, after the ‘law of large numbers'. The ‘law of large numbers' is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained after a large number of trials should be close to the expected value, and will tend to become closer to this value as more trials are carried out.

For example, a single roll of a fair die produces one of the numbers {1, 2, 3, 4, 5, 6}, each with equal probability. Therefore, the expected value, E(x), of a single roll is (1 + 2 + 3 + 4 + 5 + 6) ÷ 6 = 3.5. If the die is rolled a large number of times, the law of large numbers states that the average of the results of all these trials, known as the sample mean x̄, will be approximately equal to 3.5.

x̄ = (1/N) Σₖ₌₁ᴺ xₖ ≈ E(x) = 3.5

If the number of trials were to increase further, the average would approach the expected value still more closely. So in general,

as N → ∞, x̄ → E(x)

This is the main premise of the law of large numbers.
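The dice example above is easy to check by simulation. The following is a minimal Python sketch (the function name `sample_mean_of_rolls` is my own, not from the essay):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def sample_mean_of_rolls(n):
    """Roll a fair six-sided die n times and return the sample mean."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# As N grows, the sample mean drifts towards the expected value E(x) = 3.5.
for n in (10, 1_000, 100_000):
    print(n, sample_mean_of_rolls(n))
```

Running this shows the sample mean wandering for small N and settling close to 3.5 for large N, which is exactly the behaviour the law of large numbers predicts.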

The central limit theorem is similar to the law of large numbers in that it involves the behaviour of a distribution as N → ∞. The central limit theorem states that given a distribution with mean μ and variance σ², the sampling distribution of the mean approaches a normal distribution with mean μ and variance σ²/N as N, the sample size, increases. In other words, the central limit theorem predicts that regardless of the distribution of the parent population:

  1. The mean of the population of means is always equal to the mean of the parent population from which the samples were drawn.
  2. The standard deviation of the population of means is always equal to the standard deviation of the parent population divided by the square root of the sample size (N).
  3. The distribution of means will increasingly approximate a normal distribution as the sample size N increases.

x̄ → N(μ, σ²/N) as N → ∞

(This is the main consequence of the theorem.)
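All three predictions can be verified numerically. The sketch below (my own illustration, reusing the fair-die population from earlier; the variable names and trial counts are assumptions) draws many samples of size N, computes each sample mean, and checks that the resulting population of means has mean ≈ μ and standard deviation ≈ σ/√N:

```python
import random
import statistics

random.seed(1)

# Parent population: a fair die, so mu = 3.5 and
# sigma^2 = E(x^2) - mu^2 = 91/6 - 12.25 ≈ 2.9167.
mu = 3.5
sigma = (sum((k - mu) ** 2 for k in range(1, 7)) / 6) ** 0.5

N = 30            # sample size
trials = 20_000   # number of sample means drawn

means = [
    sum(random.randint(1, 6) for _ in range(N)) / N
    for _ in range(trials)
]

# The sampling distribution of the mean should have
# mean ≈ mu and standard deviation ≈ sigma / sqrt(N).
print(statistics.mean(means))   # close to 3.5
print(statistics.stdev(means))  # close to sigma / N**0.5
```

Note that the parent population here is discrete and flat, yet the population of means already behaves as the theorem predicts at N = 30.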

The origin of this celebrated theorem is said to lie with Abraham de Moivre, a French-born mathematician who used the normal distribution to approximate the distribution of the number of heads resulting from many tosses of a fair coin. This was documented in his book ‘The Doctrine of Chances', published in 1733, which was essentially a handbook for gamblers. This finding was somewhat forgotten until the famous French mathematician Pierre-Simon Laplace revived it in his monumental work ‘Théorie Analytique des Probabilités', published in 1812. Laplace expanded on de Moivre's findings by approximating the binomial distribution with the normal distribution.


But, as with de Moivre, Laplace's finding received little attention in his own time. It was not until the end of the nineteenth century that the importance of the central limit theorem was discerned, when, in 1901, the Russian mathematician Aleksandr Lyapunov defined it in general terms and proved precisely how it worked mathematically. A full proof of the central limit theorem will be given later in this document.

One may be familiar with the normal distribution and the famous ‘bell-shaped' curve that is associated with it.

This curve is often found when presenting data such as the heights or weights of people in a large population, where μ marks the mean. When the central limit theorem is applied, the distribution of sample means will approach a curve of this shape.

However, the amazing implication of the theorem is that this convergence occurs even when the parent population is not normally distributed.

The central limit theorem explains why the sampling distributions of many non-normal distributions tend towards the normal distribution as the sample size N increases. This includes uniform, triangular, inverse and even parabolic distributions. The following illustrations show how each tends towards a normal distribution:
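The same convergence can also be checked numerically rather than graphically. Below is a minimal Python sketch of my own using a uniform parent population on [0, 1), which is flat rather than bell-shaped; it tests whether the sample means land within one standard error of μ about 68.3% of the time, as a normal distribution would predict (the names, sample size, and tolerance are assumptions):

```python
import random
import statistics

random.seed(2)

# Parent population: uniform on [0, 1), with mu = 0.5 and
# sigma = sqrt(1/12) ≈ 0.2887 -- clearly not bell-shaped.
mu, sigma = 0.5, (1 / 12) ** 0.5
N = 40            # sample size
trials = 20_000   # number of sample means drawn

means = [statistics.mean(random.random() for _ in range(N))
         for _ in range(trials)]

# For a normal distribution, about 68.27% of values lie within
# one standard deviation (here, one standard error) of the mean.
se = sigma / N ** 0.5
within = sum(abs(m - mu) <= se for m in means) / trials
print(within)  # close to 0.683 if the means are roughly normal
```

The observed fraction coming out near 0.683 is consistent with the sampling distribution of the mean being approximately normal, even though the parent distribution is uniform.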