Uncertainty - Measurements

In metrology, physics, and engineering, the uncertainty or margin of error of a measurement is stated by giving a range of values likely to enclose the true value. This may be denoted by error bars on a graph, or by the following notations:

  • measured value ± uncertainty
  • measured value +uncertainty
    −uncertainty
  • measured value(uncertainty)

The middle notation is used when the error is not symmetrical about the value, which can occur, for example, when measurements are made on a logarithmic scale. The latter "concise notation" is used, for example, by IUPAC in stating the atomic mass of elements. There, the uncertainty given in parentheses applies to the least significant figure(s) of the number preceding it (i.e. counting from the rightmost digit to the left). For instance, 1.00794(7) stands for 1.00794 ± 0.00007, while 1.00794(72) stands for 1.00794 ± 0.00072.
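The digit-counting rule for the concise notation can be made explicit with a short sketch (the function name `parse_concise` is illustrative, not a standard API):

```python
def parse_concise(s: str) -> tuple[float, float]:
    """Parse concise uncertainty notation like '1.00794(7)'.

    The parenthesized digits apply to the least significant
    figures of the preceding number, so the uncertainty is the
    parenthesized integer scaled to the last decimal places.
    """
    value_part, rest = s.split("(")
    digits = rest.rstrip(")")
    # Count decimal places in the stated value
    decimals = len(value_part.split(".")[1]) if "." in value_part else 0
    value = float(value_part)
    # Scale the parenthesized digits to the least significant place
    uncertainty = int(digits) * 10 ** -decimals
    return value, uncertainty
```

For example, `parse_concise("1.00794(7)")` yields the pair (1.00794, 0.00007), matching the expansion given above.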

Often, the uncertainty of a measurement is found by repeating the measurement enough times to get a good estimate of the standard deviation of the values. Then, any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements.
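The relationship between the single-measurement uncertainty and the uncertainty of the mean can be sketched as follows (a minimal illustration using only the standard library; the sample standard deviation uses Bessel's correction, n − 1):

```python
import math

def mean_and_sem(values: list[float]) -> tuple[float, float]:
    """Return (mean, standard error of the mean) for repeated measurements.

    A single measurement has uncertainty equal to the sample standard
    deviation; the mean of n measurements has the smaller uncertainty
    sd / sqrt(n), the standard error of the mean.
    """
    n = len(values)
    mean = sum(values) / n
    # Sample variance with Bessel's correction (divide by n - 1)
    var = sum((x - mean) ** 2 for x in values) / (n - 1)
    sd = math.sqrt(var)
    return mean, sd / math.sqrt(n)
```

Averaging four times as many measurements halves the standard error, since the divisor grows as the square root of n.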

When the uncertainty represents the standard error of the measurement, then about 68.3% of the time the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for about 31.7% of the atomic mass values given on the list of elements by atomic mass, the true value lies outside of the stated range. If the width of the interval is doubled, only about 4.6% of the true values lie outside the doubled interval, and if the width is tripled, only about 0.3% lie outside. These values follow from the properties of the normal distribution, and they apply only if the measurement process produces normally distributed errors. In that case, the quoted standard errors are easily converted to 68.3% ("one sigma"), 95.4% ("two sigma"), or 99.7% ("three sigma") confidence intervals.
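The sigma-to-coverage conversion follows directly from the normal cumulative distribution, which the standard library exposes through the error function:

```python
import math

def coverage(k: float) -> float:
    """Probability that a normally distributed error lies within ±k sigma.

    For a standard normal distribution, P(|X| <= k) = erf(k / sqrt(2)).
    """
    return math.erf(k / math.sqrt(2))
```

Evaluating `coverage` at k = 1, 2, 3 reproduces the one-, two-, and three-sigma confidence levels of about 68.3%, 95.4%, and 99.7% quoted above.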

In this context, uncertainty depends on both the accuracy and the precision of the measurement instrument: the lower the accuracy and precision, the larger the measurement uncertainty. Precision is often determined as the standard deviation of repeated measurements of a given value, i.e. using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measurements, which shows that the uncertainty does not depend on instrumental precision alone.