Sources of Uncertainty

Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider:

  • Parameter uncertainty, which comes from the model parameters that are inputs to the computer model (mathematical model) but whose exact values are unknown to experimentalists and cannot be controlled in physical experiments. Examples are the local free-fall acceleration in a falling object experiment, and various material properties in a finite element analysis for mechanical engineering.
  • Structural uncertainty, aka model inadequacy, model bias, or model discrepancy, which comes from the lack of knowledge of the underlying true physics. It depends on how accurately a mathematical model describes the true system for a real-life situation, considering the fact that models are almost always only approximations to reality. One example is modeling a falling object with the free-fall model: the model itself is inaccurate because air friction is always present, so even if the model contains no unknown parameters, a discrepancy is still expected between the model and the true physics.
  • Algorithmic uncertainty, aka numerical uncertainty, which comes from numerical errors and numerical approximations in the implementation of the computer model. Most models are too complicated to solve exactly. For example, the finite element method or finite difference method may be used to approximate the solution of a partial differential equation, which introduces numerical errors. Other examples are numerical integration and the truncation of infinite sums, which are necessary approximations in numerical implementation (see the sketch after this list).
  • Parametric variability, which comes from the variability of the input variables of the model. For example, the dimensions of a workpiece in a manufacturing process may not be exactly as designed and instructed, which would cause variability in its performance.
  • Experimental uncertainty, aka observation error, which comes from the variability of experimental measurements. Experimental uncertainty is inevitable and can be observed by repeating a measurement many times using exactly the same settings for all inputs/variables.
  • Interpolation uncertainty, which comes from a lack of available data from computer model simulations and/or experimental measurements. For input settings without simulation data or experimental measurements, one must interpolate or extrapolate in order to predict the corresponding responses.
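
To make the notion of algorithmic uncertainty more concrete, the following sketch estimates the truncation error of a composite trapezoid-rule integration by comparing results at two step sizes. This is a minimal illustration only: the integrand, limits, and step counts are arbitrary choices, not taken from any particular model discussed here.

    # Estimating algorithmic (numerical) uncertainty of a quadrature rule.
    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoid rule with n subintervals."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            total += f(a + i * h)
        return h * total

    f = math.sin                     # toy integrand; exact integral on [0, pi] is 2
    a, b = 0.0, math.pi
    coarse = trapezoid(f, a, b, 50)
    fine = trapezoid(f, a, b, 100)

    # The trapezoid rule is second order, so halving the step size reduces the
    # error by roughly a factor of 4; the difference between the two results
    # therefore gives a rough estimate of the remaining numerical error.
    error_estimate = abs(fine - coarse) / 3.0
    print(f"fine result     : {fine:.8f}")
    print(f"estimated error : {error_estimate:.2e}")
    print(f"actual error    : {abs(2.0 - fine):.2e}")

The same step-halving comparison carries over to time steps or mesh sizes in finite difference and finite element computations, where no exact solution is available to check against.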

Another way of categorization is to classify uncertainty into two categories:

  • Aleatoric uncertainty, aka statistical uncertainty, which represents unknowns that differ each time we run the same experiment. As an example, consider simulating the take-off of an airplane: even if we could exactly control the wind speeds along the runway, letting 10 planes of the same make start would still yield different trajectories due to fabrication differences. Similarly, if all we knew were the average wind speed, letting the same plane start 10 times would still yield different trajectories, because we do not know the exact wind speed at every point of the runway, only its average. Aleatoric uncertainties are therefore something an experimenter cannot do anything about: they exist, and they cannot be suppressed by more accurate measurements.
  • Epistemic uncertainty, aka systematic uncertainty, which is due to things we could in principle know but do not in practice. This may be because we have not measured a quantity sufficiently accurately, because our model neglects certain effects, or because particular data are deliberately hidden. The sketch after this list contrasts the two categories with a toy measurement example.
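
As a deliberately simplified illustration of this contrast, the sketch below simulates repeated noisy measurements of a fixed quantity. The per-measurement scatter (aleatoric) does not shrink as more measurements are taken, whereas the uncertainty in the estimated mean (an epistemic contribution, due to limited data) does. The true value and noise level are arbitrary illustrative numbers.

    # Toy measurement model: aleatoric scatter vs. epistemic uncertainty of the mean.
    import random
    import statistics

    random.seed(0)
    true_value = 10.0     # quantity being measured (assumed, for illustration)
    noise_sd = 0.5        # aleatoric scatter of a single measurement

    for n in (10, 100, 1000):
        samples = [random.gauss(true_value, noise_sd) for _ in range(n)]
        scatter = statistics.stdev(samples)      # stays near noise_sd: aleatoric
        mean_unc = scatter / n ** 0.5            # shrinks with n: epistemic
        print(f"n={n:5d}  per-measurement scatter={scatter:.3f}  "
              f"uncertainty of the mean={mean_unc:.4f}")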

In real-life applications, both kinds of uncertainty are often present. Uncertainty quantification intends to work toward reducing epistemic uncertainties to aleatoric uncertainties. Quantifying aleatoric uncertainties is relatively straightforward: techniques such as the Monte Carlo method are frequently used, and the resulting probability distribution can be represented by its moments (in the Gaussian case, the mean and covariance suffice) or, more recently, by techniques such as Karhunen–Loève and polynomial chaos expansions (see the sketch below). To evaluate epistemic uncertainties, efforts are made to gain better knowledge of the system, process or mechanism; methods such as fuzzy logic or evidence theory (Dempster–Shafer theory, a generalization of the Bayesian theory of subjective probability) are used.
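
The sketch below shows the basic Monte Carlo forward-propagation pattern mentioned above: sample the uncertain inputs, run the model, and summarize the output distribution by its first two moments. The toy model g and the input distributions are illustrative assumptions, not taken from the text.

    # Monte Carlo propagation of input uncertainty through a toy model.
    import random
    import statistics

    random.seed(1)

    def g(x1, x2):
        """Toy computer model: any deterministic function of the uncertain inputs."""
        return x1 ** 2 + 0.5 * x2

    n_samples = 100_000
    outputs = []
    for _ in range(n_samples):
        x1 = random.gauss(1.0, 0.1)      # uncertain input 1 (assumed distribution)
        x2 = random.uniform(0.0, 2.0)    # uncertain input 2 (assumed distribution)
        outputs.append(g(x1, x2))

    # Represent the output distribution by its first two moments.
    mean = statistics.fmean(outputs)
    variance = statistics.pvariance(outputs, mu=mean)
    print(f"output mean     : {mean:.4f}")
    print(f"output variance : {variance:.4f}")

With more structure on the inputs, the same sampling loop can instead be used to fit a non-intrusive polynomial chaos expansion of the output rather than reporting raw moments.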
