Error analysis
INTRODUCTION:
Error analysis is the study and evaluation of uncertainty in measurements (J. R. Taylor, 1996). The determination of the degree of uncertainty can be difficult and requires additional effort on the part of the measurer. Nevertheless, evaluation of the uncertainty cannot be neglected, because a measurement of totally unknown reliability is worthless.
Precision and Accuracy
Two terms are commonly associated with any discussion of error: "precision" and "accuracy".
Precision refers to the reproducibility of a measurement while accuracy is a measure of the
closeness to true value. The concepts of precision and accuracy are demonstrated by the series
of targets below. If the center of the target is the "true value", then A is neither precise nor
accurate. Target B is very precise (reproducible) but not accurate. The average of target C's
marks give an accurate result but precision is poor. Target D demonstrates both precision and
accuracy, which is the goal in lab.
[Figure: four targets, A through D, illustrating the combinations of precision and accuracy described above.]
All experiments, no matter how meticulously planned and executed, have some degree of error
or uncertainty. In general chemistry lab, you should learn how to identify, correct, or evaluate
sources of error in an experiment and how to express the accuracy and precision of
measurements when collecting data or reporting results.
Types of Errors:
Three general types of errors occur in lab measurements: random error, systematic error, and gross errors.
Random (or indeterminate) errors: Random errors are errors in measurement that lead to inconsistent measured values when repeated measurements of a constant attribute or quantity are taken. The word random indicates that they are inherently unpredictable and have zero expected value: they are scattered about the true value and tend to have zero arithmetic mean when a measurement is repeated a number of times with the same instrument. All measurements are prone to random error.
Random error is caused by unpredictable fluctuations in the readings of a measurement apparatus, or in the experimenter's interpretation of the instrumental reading; these fluctuations may be due in part to interference of the environment with the measurement process.
The concept of random error is closely related to the concept of precision. The higher the precision of a measurement instrument, the smaller the variability (standard deviation) of the fluctuations in its readings.
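These ideas can be illustrated with a short simulation; the true value of 50.0, the spread of 0.3, and the seed below are arbitrary, purely illustrative choices:

```python
import random
import statistics

# Simulate 1000 repeated readings of a quantity whose true value is 50.0,
# each perturbed by a random error of standard deviation 0.3 (the
# instrument's precision). All numbers here are hypothetical.
random.seed(1)
true_value = 50.0
readings = [true_value + random.gauss(0.0, 0.3) for _ in range(1000)]

# Random errors scatter about the true value: the mean of many readings
# lands close to 50.0, while the standard deviation of the readings
# estimates the precision, close to 0.3.
print(abs(statistics.mean(readings) - true_value) < 0.05)
print(abs(statistics.stdev(readings) - 0.3) < 0.05)
```

Note that averaging many readings reduces the effect of random error on the mean, but (as the next section explains) it cannot remove a systematic error.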
Systematic errors:
Systematic errors are biases in measurement which lead to a situation where the mean of many separate measurements differs significantly from the actual value of the measured attribute. All measurements are prone to systematic errors, often of several different types. Sources of systematic error include faulty calibration of measurement instruments (zero error), changes in the environment which interfere with the measurement process, and imperfect methods of observation, any of which can produce either a zero error or a percentage error. For example, consider an experimenter timing the period of a pendulum swinging past a fiducial mark: if the stopwatch starts with 1 second already on the clock, then all of the results will be off by 1 second (zero error). If the experimenter takes 20 readings but begins counting and says "1" just as he starts the timer, then when he averages his results there will be a percentage error, because the final result will be slightly larger than the correct time. Distance measured by radar will be systematically overestimated if the slight slowing of the waves in air is not accounted for. Incorrect zeroing of an instrument, leading to a zero error, is an example of systematic error in instrumentation; so is a clock running fast or slow.
Systematic errors may also be present in the result of an estimate based on a mathematical model or physical law. For instance, the estimated oscillation frequency of a pendulum will be systematically in error if slight movement of the support is not accounted for.
Systematic errors can be constant, or can be related (e.g. proportional or a percentage) to the actual value of the measured quantity, or even to the value of a different quantity (the reading of a ruler can be affected by the ambient temperature). When they are constant, they are simply due to incorrect zeroing of the instrument. When they are not constant, they can change sign. For instance, if a thermometer is affected by a proportional systematic error equal to 2% of the actual temperature, and the actual temperature is 100°, 0°, or −100°, the measured temperature will be 102° (systematic error = +2°), 0° (null systematic error) or −102° (systematic error = −2°), respectively. Thus, the temperature is overestimated when it is above zero and underestimated when it is below zero.
Constant systematic errors are very difficult to deal with, because their effects are only observable if they can be removed. Such errors cannot be removed by repeating measurements or averaging large numbers of results. A common method to remove systematic error is through calibration of the measurement instrument.
In a statistical context, the term systematic error usually arises where the sizes and directions of possible errors are unknown.
Drift
Systematic errors which change during an experiment (drift) are easier to detect. Measurements show trends with time rather than varying randomly about a mean.
Drift is evident if a measurement of a constant quantity is repeated several times and the measurements drift one way during the experiment, for example if each measurement is higher than the previous one, as could happen if an instrument becomes warmer during the experiment. If the measured quantity is variable, it is possible to detect a drift by checking the zero reading during the experiment as well as at the start (indeed, the zero reading is a measurement of a constant quantity). If the zero reading is consistently above or below zero, a systematic error is present. If this cannot be eliminated, for instance by resetting the instrument immediately before the experiment, it needs to be allowed for by subtracting its (possibly time-varying) value from the readings, and by taking it into account in assessing the accuracy of the measurement.
If no pattern in a series of repeated measurements is evident, the presence of fixed systematic errors can only be found if the measurements are checked, either by measuring a known quantity or by comparing the readings with readings made using a different apparatus known to be more accurate. For example, suppose that timing a pendulum with an accurate stopwatch several times gives readings randomly distributed about the mean. A systematic error is present if the stopwatch is checked against the 'speaking clock' of the telephone system and found to be running slow or fast. Clearly, the pendulum timings need to be corrected according to how fast or slow the stopwatch was found to be running. Measuring instruments such as ammeters and voltmeters need to be checked periodically against known standards.
Systematic errors can also be detected by measuring already known quantities. For example, a spectrometer fitted with a diffraction grating may be checked by using it to measure the wavelength of the D lines of the sodium spectrum, which are at 589.0 and 589.6 nm. The measurements may be used to determine the number of lines per millimetre of the diffraction grating, which can then be used to measure the wavelength of any other spectral line.
Types of these errors include:
a) Instrumental uncertainties: Each instrument has an inherent amount of uncertainty in its measurement. Even the most precise measuring device cannot give the actual value, because to do so would require an infinitely precise instrument. A measure of the accuracy of an instrument is given by its uncertainty.
b) Method uncertainties: These arise from nonideal behavior of substances, slowness of reactions, instability of species, etc.
c) Personal uncertainties: These arise from the experimenter, for example limitations of judgment when reading a scale between graduations, or a personal bias toward particular readings.
D. STANDARD WAYS FOR COMPARING QUANTITIES
1. Deviation.
When a set of measurements is made of a physical quantity, it is useful to express the difference between each measurement and the average (mean) of the entire set. This is called the deviation of the measurement from the mean. Use the word deviation when an individual measurement of a set is being compared with a quantity which is representative of the entire set. Deviations can be expressed as absolute amounts, or as percents.
2. Difference.
There are situations where we need to compare measurements or results which are assumed to be about equally reliable, that is, to express the absolute or percent difference between the two. For example, you might want to compare two independent determinations of a quantity, or to compare an experimental result with one obtained independently by someone else, or by another procedure. To state the difference between two things implies no judgment about which is more reliable.
3. Experimental discrepancy.
When a measurement or result is compared with another which is assumed or known to be more reliable, we call the difference between the two the experimental discrepancy. Discrepancies may be expressed as absolute discrepancies or as percent discrepancies. It is customary to calculate the percent by dividing the discrepancy by the more reliable quantity (then, of course, multiplying by 100). However, if the discrepancy is only a few percent, it makes no practical difference which of the two is in the denominator.
E. MEASURES OF ERROR
The experimental error [uncertainty] can be expressed in several standard ways:
1. Limits of error
Error limits may be expressed in the form Q ± ΔQ, where Q is the measured quantity and ΔQ is the magnitude of its limit of error.[3] This expresses the experimenter's judgment that the "true" value of Q lies between Q − ΔQ and Q + ΔQ. This entire interval within which the measurement lies is called the range of error. Manufacturers' performance guarantees for laboratory instruments are often expressed this way.
2. Average deviation[4]
This measure of error is calculated in this manner: First calculate the mean (average) of a set of successive measurements of a quantity, Q. Then find the magnitude of the deviation of each measurement from the mean. Average these magnitudes of deviations to obtain a number called the average deviation of the data set. It is a measure of the dispersion (spread) of the measurements with respect to the mean value of Q, that is, of how far a typical measurement is likely to deviate from the mean.[5] But this is not quite what is needed to express the quality of the mean itself. We want an estimate of how far the mean value of Q is likely to deviate from the "true" value of Q. The appropriate statistical estimate of this is called the average deviation of the mean. To find this rigorously would involve us in the theory of probability and statistics. We will state the result without proof.[6]
For a set of n measurements Qi whose mean value is <Q>,[7] the average deviation of the mean (A.D.M.) is:

(Equation 1)

Average deviation of the mean = [ Σ |Qi − <Q>| ] / [ (n − 1) n^(1/2) ]

where the sum is taken from i = 1 to n.
The vertical bars enclosing an expression mean "take the absolute value" of that expression. That means that if the expression is negative, make it positive.
If the A.D.M. is quoted as the error measure of a mean, <Q>exp, this is equivalent to saying that the probability of <Q>exp lying within one A.D.M. of the "true" value of Q, Qtrue, is 58%, and the odds against it lying outside of one A.D.M. are 1.4 to 1.
As a rough rule of thumb, the probability of <Q>exp being within three A.D.M. (on either side) of the true value is nearly 100% (actually 98%). This is a useful relation for converting (or comparing) A.D.M. to limits of error.[8]
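As a sketch, the A.D.M. can be computed in a few lines of Python; the data values are hypothetical, and the denominator follows the form of the A.D.M. formula as quoted above:

```python
import math

def average_deviation_of_mean(values):
    # A.D.M. = ( sum of |Qi - <Q>| ) / ( (n - 1) * sqrt(n) ),
    # following the formula quoted in the text
    n = len(values)
    mean = sum(values) / n
    return sum(abs(q - mean) for q in values) / ((n - 1) * math.sqrt(n))

readings = [10.1, 10.3, 9.9, 10.2, 10.0]  # hypothetical repeated measurements
print(round(average_deviation_of_mean(readings), 3))  # 0.067
```

The mean of these readings is 10.1, the magnitudes of the deviations sum to 0.6, and dividing by (n − 1)√n = 4√5 gives about 0.067.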
3. Standard Deviation of the mean.
[This section is included for completeness, and may be skipped or skimmed unless your instructor specifically assigns it.]
The standard deviation is a well known, widely used, and statistically well-founded measure of error. For a set of n measurements Qi whose mean value is <Q>, the standard deviation of the mean is found from:

(Equation 2)

Standard deviation of the mean = [ Σ (Qi − <Q>)² / ( n(n − 1) ) ]^(1/2)

The sum is from i = 1 to n.
This form of the equation is not very convenient for calculations. By expanding the summand it may be recast into a form which lends itself to efficient computation with an electronic calculator:
(Equation 3)

Standard deviation of the mean = [ ( Σ Qi² − n<Q>² ) / ( n(n − 1) ) ]^(1/2)

The sum of the Qi² is from i = 1 to n.
[Note that the n<Q>² is a separate term in the numerator; it is not summed over.]
The calculation of the standard deviation requires two summations, one a sum of the data values (to obtain <Q>), and one a sum of the squares of the data values. Many electronic calculators allow these two sums to be obtained with only one entry of each data value. This is a good feature to have in a scientific calculator. When n is large, the quantity n(n − 1) becomes approximately n², further simplifying the work.
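The equivalence of the defining form (Equation 2) and the two-summation computational form (Equation 3) can be checked with a short Python sketch; the data values are hypothetical:

```python
import math

def sdm_definition(values):
    # Equation 2: [ sum of (Qi - <Q>)^2 / ( n(n - 1) ) ]^(1/2)
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((q - mean) ** 2 for q in values) / (n * (n - 1)))

def sdm_computational(values):
    # Equation 3: two running sums (sum of Qi and sum of Qi^2) suffice
    n = len(values)
    s1 = sum(values)                  # sum of the data values, gives <Q>
    s2 = sum(q * q for q in values)   # sum of the squares of the data values
    mean = s1 / n
    return math.sqrt((s2 - n * mean ** 2) / (n * (n - 1)))

data = [10.1, 10.3, 9.9, 10.2, 10.0]
print(round(sdm_definition(data), 4))                               # 0.0707
print(abs(sdm_definition(data) - sdm_computational(data)) < 1e-6)   # True
```

The computational form is convenient for calculators, but note that subtracting two nearly equal large sums can lose precision in floating point; the defining form is numerically safer in software.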
The use of the standard deviation is hardly justified unless the experimenter has taken a large number of repeated measurements ofeachexperimentally determined quantity. This is seldom the case in the freshman laboratory.
It can be shown that when the measurements are distributed according to the "normal" ("Gaussian")[11] distribution, average deviations and standard deviations are related by a simple formula:[12]
(Equation 4)
[average deviation] = 0.80 [standard deviation]
This is a useful "rule of thumb" when it is necessary to compare the two measures of error or convert from one to the other.
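This relation can be checked empirically. The sketch below draws Gaussian samples and compares the two measures; the sample size and seed are arbitrary choices:

```python
import random
import statistics

# Empirical check of Equation 4: for Gaussian data, the ratio of the average
# (absolute) deviation to the standard deviation is sqrt(2/pi), about 0.798.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mean = statistics.fmean(samples)
avg_dev = sum(abs(x - mean) for x in samples) / len(samples)
std_dev = statistics.pstdev(samples)
print(round(avg_dev / std_dev, 2))  # close to 0.80
```

The exact constant is √(2/π) ≈ 0.7979, which the text rounds to 0.80.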
F. STANDARD METHODS FOR EXPRESSING ERROR
1. Absolute Error.
Uncertainties may be expressed as absolute measures, giving the size of a quantity's uncertainty in the same units as the quantity itself.
Example. A piece of metal is weighed a number of times, and the average value obtained is: M = 34.6 gm. By analysis of the scatter of the measurements, the uncertainty is determined to be m = 0.07 gm. This absolute uncertainty may be included with the measurement in this manner: M = 34.6 ± 0.07 gm.
The value 0.07 after the ± sign in this example is the estimated absolute error in the value 34.6.
2. Relative (or Fractional) Error.
Uncertainties may be expressed as relative measures, giving the ratio of the quantity's uncertainty to the quantity itself. In general:
(Equation 5)
relative error = (absolute error in a measurement) / (size of the measurement)
Example.In the previous example, the uncertainty in M = 34.6 gm was m = 0.07 gm. The relative uncertainty is therefore:
(Equation 6)
m / M = 0.07 gm / 34.6 gm = 0.002, or, if you wish, 0.2%
It is a matter of taste whether one chooses to express relative errors "as is" (as fractions), or as percents. I prefer to work with them as fractions in calculations, avoiding the necessity for continually multiplying by 100. Why do unnecessary work?
But when expressing final results, it is often meaningful to express the relative uncertainty as a percent. That's easily done, just multiply the relative uncertainty by 100. This one is 0.2%.
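As a minimal sketch, the calculation of Equations 5 and 6, with percent formatting done only at the end:

```python
def relative_error(absolute_error, measurement):
    # Equation 5: relative error = absolute error / size of the measurement
    return absolute_error / abs(measurement)

# The mass example above: M = 34.6 gm with absolute uncertainty m = 0.07 gm
rel = relative_error(0.07, 34.6)
print(round(rel, 3))   # the 0.002 of Equation 6
print(f"{rel:.1%}")    # formatted as a percent: 0.2%
```

Working with the fraction internally and converting to a percent only when reporting mirrors the advice above: it avoids repeated multiplications by 100 in intermediate steps.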
3. Absolute or relative form; which to use.
Common sense and good judgment must be used in choosing which form to use to represent the error when stating a result. Consider a temperature measurement with a thermometer known to be reliable to ± 0.5 degree Celsius. Would it make sense to say that this causes a 0.5% error in measuring the boiling point of water (100 degrees) but a whopping 10% error in the measurement of cold water at a temperature of 5 degrees? Of course not! [And what if the temperatures were expressed in kelvins? That would seem to reduce the percent errors to insignificance!] Errors and discrepancies expressed as percents are meaningless for some types of measurements. Sometimes this is due to the nature of the measuring instrument, sometimes to the nature of the measured quantity itself, or the way it is defined.
There are cases where absolute errors are inappropriate and therefore the errors should be expressed in relative form. There are also cases where the reverse is true.
Sometimes both absolute and relative error measures are necessary to completely characterize a measuring instrument's error. For example, if a plastic meter stick uniformly expanded, the effect could be expressed as a percent determinate error. If one half millimeter were worn off the zero end of the stick, and this were not noticed or compensated for, this would best be expressed as an absolute determinate error. Clearly both errors might be present in a particular meter stick. The manufacturer of a voltmeter (or other electrical meter) usually gives its guaranteed limits of error as a constant determinate error plus a 'percent' error.
Both absolute and relative forms of error may appear in the intermediate algebraic steps when deriving error equations. [This is discussed in section H below.] This is merely a computational artifact, and has no bearing on the question of which form is meaningful for communicating the size and nature of the error in data and results.
G. IMPORTANCE OF REPEATED MEASUREMENTS
A single measurement of a quantity is not sufficient to convey any information about the quality of the measurement. You may need to take repeated measurements to find out how consistent the measurements are.
If you have previously made this type of measurement, with the same instrument, and have determined the uncertainty of that particular measuring instrument and process, you may appeal to your experience to estimate the uncertainty. In some cases you may know, from past experience, that the measurement is scale limited, that is, that its uncertainty is smaller than the smallest increment you can read on the instrument scale. Such a measurement will give the same value exactly for repeated measurements of the same quantity. If you know (from direct experience) that the measurement is scale limited, then quote its uncertainty as the smallest increment you can read on the scale.
Students in this course don't need to become experts in the fine details of statistical theory. But they should be constantly aware of the experimental errors and do whatever is necessary to find out how much they affect results. Care should be taken to minimize errors. The sizes of experimental errors in both data and results should be determined, whenever possible, and quantified by expressing them as average deviations. [In some cases commonsense experimental investigation can provide information about errors without the use of involved mathematics.]
The student should realize that the full story about experimental errors has not been given here, but will be revealed in later courses and more advanced laboratory work.
H. PROPAGATION OF DETERMINATE ERRORS
The importance of estimating data errors is due to the fact that data errors propagate through the calculations to produce errors in results. It is the size of a data error's effect on the results which is most important. Every effort should be made to determine reasonable error estimates for every important experimental result.
We illustrate how errors propagate by first discussing how to find the amount of error in results by considering how data errors propagate through simple mathematical operations. We first consider the case of determinate errors: those that have known sign. In this way we will discover certain useful rules for error propagation; we'll then be able to modify the rules to apply to other error measures and also to indeterminate errors.
We are here developing the mathematical rules for "finite differences," the algebra of numbers which have relatively small variations imposed upon them. The finite differences are those variations from "true values" caused by experimental errors.
Suppose that an experimental result is calculated from the sum of two data quantities A and B. For this discussion we'll use a and b to represent the errors in A and B respectively. The data quantities are written to explicitly show the errors:
(A + a) and (B + b)
We allow that a and b may be either positive or negative, the signs being "in" the symbols "a" and "b." But we must emphasize that we are here considering the case where the signs of a and b are determinable, and we know what those signs are (positive, or negative).
The result of adding A and B to get R is expressed by the equation: R = A + B. With the errors explicitly included, this is written:
(A + a) + (B + b) = (A + B) + (a + b)
The result with its error, r, explicitly shown, is: (R + r):
(R + r) = (A + B) + (a + b)
The error in R is therefore: r = a + b.
We conclude that the determinate error in the sum of two quantities is just the sum of the errors in those quantities. You can easily work out for yourself the case where the result is calculated from the difference of two quantities. In that case the determinate error in the result will be the difference in the errors. Summarizing:
* Sum rule for determinate errors. When two quantities are added, their determinate errors add.
* Difference rule for determinate errors. When two quantities are subtracted, their determinate errors subtract.
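These two rules can be verified numerically; the data values and the sizes and signs of the errors below are hypothetical:

```python
# Hypothetical data values with determinate (known-sign) errors
A, a = 10.0, 0.2    # A reads high by 0.2
B, b = 4.0, -0.1    # B reads low by 0.1

# Sum rule: the error in A + B is a + b
r_sum = ((A + a) + (B + b)) - (A + B)
print(round(r_sum, 10) == round(a + b, 10))   # True

# Difference rule: the error in A - B is a - b
r_diff = ((A + a) - (B + b)) - (A - B)
print(round(r_diff, 10) == round(a - b, 10))  # True
```

The rounding merely absorbs floating-point noise; algebraically the agreement is exact, as the derivation above shows.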
Now let's consider a result obtained by multiplication, R = AB. With errors explicitly included:
(R + r) = (A + a)(B + b) = AB + aB + Ab + ab, or: r = aB + Ab + ab
This doesn't look promising for recasting as a simple rule. However, when we express the errors in relative form, things look better. If the error a is small relative to A, and b is small relative to B, then (ab) is certainly small relative to AB, as well as small compared to (aB) and (Ab). Therefore we neglect the term (ab) (throw it out), since we are interested only in error estimates to one or two significant figures. Now we express the relative error in R as
r/R = (aB + Ab) / AB = a/A + b/B
This gives us a very simple rule:
* Product rule for determinate errors. When two quantities are multiplied, their relative determinate errors add.
A similar procedure may be carried out for the quotient of two quantities, R = A/B.
r/R = [ (A + a)/(B + b) − A/B ] / (A/B)
    = [ (A + a)B − A(B + b) ] / [ A(B + b) ]
    = (aB − Ab) / [ A(B + b) ]
    ≈ (aB − Ab) / AB
    = a/A − b/B
The approximation made in the next to last step was to neglect b in the denominator, which is valid if the relative errors are small. So the result is:
Quotient rule for determinate errors. When two quantities are divided, the relative determinate error of the quotient is the relative determinate error of the numerator minus the relative determinate error of the denominator.
A consequence of the product rule is this:
Power rule for determinate errors. When a quantity Q is raised to a power, P, the relative determinate error in the result is P times the relative determinate error in Q. This also holds for fractional powers, e.g. the relative determinate error in the square root of Q is one half the relative determinate error in Q.
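The product, quotient, and power rules are first-order approximations, which a short numerical sketch can confirm; the data values and errors are hypothetical, with 1% relative errors:

```python
# Hypothetical data values with small determinate errors (1% each)
A, a = 50.0, 0.5
B, b = 20.0, -0.2

# Product rule: relative error in A*B is approximately a/A + b/B
exact = ((A + a) * (B + b) - A * B) / (A * B)
print(abs(exact - (a / A + b / B)) < 1e-3)    # True: agrees to first order

# Quotient rule: relative error in A/B is approximately a/A - b/B
exact_q = ((A + a) / (B + b) - A / B) / (A / B)
print(abs(exact_q - (a / A - b / B)) < 1e-3)  # True

# Power rule: relative error in A**3 is approximately 3 * (a/A)
exact_p = ((A + a) ** 3 - A ** 3) / A ** 3
print(abs(exact_p - 3 * (a / A)) < 1e-3)      # True
```

The small residuals correspond to the neglected higher-order terms, such as the (ab) term dropped in the product derivation above.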
One illustrative practical use of determinate errors is the case of correcting a result when you discover, after completing lengthy measurements and calculations, that there was a determinate error in one or more of the measurements. Perhaps a scale or meter had been miscalibrated. You discover this, and find the size and sign of the error in that measuring tool. Rather than repeat all the measurements, you may construct the determinate-error equation and use your knowledge of the miscalibration error to correct the result. As you will see in the following sections, you will usually have to construct the error equation anyway, so why not use it to correct for the discovered error, rather than repeating all the calculations?
I. PROPAGATION OF INDETERMINATE ERRORS
Indeterminate errors have unknown sign. If their distribution is symmetric about the mean, then they are unbiased with respect to sign. Also, if indeterminate errors in different quantities are independent of each other, their signs have a tendency to offset each other in computations.[11]
When we are only concerned with limits of error (or maximum error) we must assume a "worst-case" combination of signs. In the case of subtraction, A − B, the worst-case deviation of the answer occurs when the errors are either +a and −b or −a and +b. In either case, the maximum error will be (a + b).
In the case of the quotient, A/B, the worst-case deviation of the answer occurs when the errors have opposite sign, either +a and −b or −a and +b. In either case, the maximum size of the relative error will be (a/A + b/B).
The results for the operations of addition and multiplication are the same as before. In summary, maximum indeterminate errors propagate according to the following rules:
Addition and subtraction rule for indeterminate errors. The absolute indeterminate errors add.
Product and quotient rule for indeterminate errors. The relative indeterminate errors add.
A consequence of the product rule is this:
Power rule for indeterminate errors. When a quantity Q is raised to a power, P, the relative error in the result is P times the relative error in Q. This also holds for fractional powers, e.g. the relative error in the square root of Q is one half the relative error in Q.
These rules apply only when combining independent errors, that is, individual errors which are not dependent on each other in size or sign.
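The worst-case rules above can be checked by brute force over all sign combinations; the data values and error sizes below are hypothetical:

```python
from itertools import product

# Hypothetical data values with limits of error of unknown sign
A, a = 10.0, 0.2
B, b = 4.0, 0.1

sign_combos = list(product([1, -1], repeat=2))

# Subtraction, A - B: the worst-case absolute error should be a + b
worst_diff = max(abs(((A + s * a) - (B + t * b)) - (A - B))
                 for s, t in sign_combos)
print(abs(worst_diff - (a + b)) < 1e-12)

# Quotient, A/B: the worst-case relative error is close to a/A + b/B
# (slightly larger, since the rule is a first-order approximation)
worst_quot = max(abs((A + s * a) / (B + t * b) - A / B) / (A / B)
                 for s, t in sign_combos)
print(a / A + b / B <= worst_quot <= (a / A + b / B) * 1.05)
```

Enumerating the sign combinations makes explicit what "worst case" means: the single combination of signs that pushes the result furthest from its error-free value.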
It can be shown (but not here) that these rules also apply sufficiently well to errors expressed as average deviations. The one drawback to this is that the error estimates made this way are still overconservative in that they do not fully account for the tendency of error terms associated with independent errors to offset each other. This, however, would be a minor correction of little importance in our work in this course.
Error propagation rules may be derived for other mathematical operations as needed. For example, the rules for errors in trig functions may be derived by use of trig identities, using the approximations sin β ≈ β and cos β ≈ 1, valid when β is small. Rules for exponentials may be derived also.
When mathematical operations are combined, the rules may be successively applied to each operation, and an equation may be algebraically derived[12] which expresses the error in the result in terms of errors in the data. Such an equation can always be cast into standard form, in which each error source appears in only one term. Let x represent the error in X, y the error in Y, etc. Then the error r in any result R, calculated by any combination of mathematical operations from data values X, Y, Z, etc., is given by:
r = (cx)x + (cy)y + (cz)z ... etc.
This may always be algebraically rearranged to:
(Equation 7)
r/R = {Cx}(x/X) + {Cy}(y/Y) + {Cz}(z/Z) ... etc.
The coefficients (cx) and {Cx} etc. in each term are extremely important because they, along with the sizes of the errors, determine how much each error affects the result. The relative size of the terms of this equation shows us the relative importance of the error sources. It's not the relative size of the errors (x, y, etc.), but the relative size of the error terms which tells us their relative importance.
If this error equation was derived from the determinate-error rules, the relative errors in the above might have + or − signs. The coefficients may also have + or − signs, so the terms themselves may have + or − signs. It is therefore possible for terms to offset each other.
If this error equation was derived from the indeterminate-error rules, the error measures appearing in it are inherently positive. The coefficients will turn out to be positive also, so terms cannot offset each other.
It is convenient to know that the indeterminate-error equation may be obtained directly from the determinate-error equation by simply choosing the worst case, i.e., by taking the absolute value of every term. This forces all terms to be positive. This step is only done after the determinate-error equation has been fully derived in standard form.
The error equation in standard form is one of the most useful tools for experimental design and analysis. It should be derived (in algebraic form) even before the experiment is begun, as a guide to experimental strategy. It can show which error sources dominate, and which are negligible, thereby saving time one might spend fussing with unimportant considerations. It can suggest how the effects of error sources might be minimized by appropriate choice of the sizes of variables. It can tell you how good a measuring instrument you need to achieve a desired accuracy in the results.
The student who neglects to derive and use this equation may spend an entire lab period using instruments, strategy, or values insufficient to the requirements of the experiment. And he may end up without the slightest idea why the results were not as good as they ought to have been.
A final comment for those who wish to use standard deviations as indeterminate error measures: Since the standard deviation is obtained from the average of squared deviations, equation (7) must be modified: each term of the equation (both sides) must be squared:
(Equation 8)
(r/R)² = (Cx)²(x/X)² + (Cy)²(y/Y)² + (Cz)²(z/Z)²
This rule is given here without proof.
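As a closing sketch, Equation 8 applied to a hypothetical result R = X · Y² / Z, whose coefficients are Cx = 1, Cy = 2, Cz = 1; the relative errors used are also hypothetical:

```python
import math

# Equation 8: with standard deviations as error measures, the
# coefficient-weighted relative errors combine in quadrature.
def quadrature_relative_error(terms):
    # terms: list of (coefficient, relative error) pairs
    return math.sqrt(sum((c * rel) ** 2 for c, rel in terms))

# Hypothetical relative errors: 1% in X, 2% in Y, 0.5% in Z
r_over_R = quadrature_relative_error([(1, 0.01), (2, 0.02), (1, 0.005)])
print(round(r_over_R, 4))  # 0.0415
```

Note that quadrature addition gives a smaller combined error than the worst-case rules of section I, reflecting the tendency of independent errors to partially offset each other.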