Optimal Design - Minimizing The Variance of Estimators

Experimental designs are evaluated using statistical criteria.

It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss–Markov theorem). In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an ("efficient") estimator is called the "Fisher information" for that estimator. Because of this reciprocity, minimizing the variance corresponds to maximizing the information.
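
For a single real parameter, this reciprocity can be made concrete with the standard Cramér–Rao inequality, stated here only for illustration:

    % Fisher information of a single real parameter \theta
    I(\theta) = \mathbb{E}\!\left[ \left( \frac{\partial}{\partial\theta} \log f(X;\theta) \right)^{2} \right]
    % Cramér–Rao bound for an unbiased estimator \hat{\theta}
    \operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)}
    % Equality holds for an efficient estimator, so minimizing its variance
    % is equivalent to maximizing the Fisher information I(\theta).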

When the statistical model has several parameters, however, the mean of the parameter-estimator is a vector and its variance is a matrix. The inverse matrix of the variance-matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using statistical theory, statisticians compress the information-matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized. The traditional optimality-criteria are invariants of the information matrix; algebraically, the traditional optimality-criteria are functionals of the eigenvalues of the information matrix.

  • A-optimality ("average" or trace)
    • One criterion is A-optimality, which seeks to minimize the trace of the inverse of the information matrix. This criterion results in minimizing the average variance of the estimates of the regression coefficients.
  • C-optimality
    • This criterion minimizes the variance of a best linear unbiased estimator of a predetermined linear combination of model parameters.
  • D-optimality (determinant)
    • A popular criterion is D-optimality, which seeks to minimize |(X'X)⁻¹|, or equivalently maximize the determinant |X'X| of the information matrix X'X of the design. This criterion results in maximizing the differential Shannon information content of the parameter estimates.
  • E-optimality (eigenvalue)
    • Another design is E-optimality, which maximizes the minimum eigenvalue of the information matrix. The E-optimality criterion need not be differentiable at every point. Such E-optimal designs can be computed using methods of convex minimization that use subgradients rather than gradients at points of non-differentiability. Such non-differentiability need not be a serious problem, however: E-optimality problems are special cases of semidefinite programming problems, which have effective solution methods, especially bundle methods and interior-point methods.
  • T-optimality
    • This criterion maximizes the trace of the information matrix. (A short computational sketch of these matrix-based criteria follows this list.)
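
As a minimal illustrative sketch, these matrix-based criteria can be evaluated directly from a design matrix with a few lines of Python. The small two-factor design, the contrast vector c, and the variable names below are assumptions chosen only for demonstration:

    import numpy as np

    # Hypothetical design matrix X (rows = experimental runs, columns = model terms).
    X = np.array([
        [1.0, -1.0, -1.0],
        [1.0, -1.0,  1.0],
        [1.0,  1.0, -1.0],
        [1.0,  1.0,  1.0],
    ])

    M = X.T @ X                # information matrix X'X (up to the error variance)
    M_inv = np.linalg.inv(M)   # variance matrix of the least-squares coefficients (up to sigma^2)
    eigvals = np.linalg.eigvalsh(M)

    A_value = np.trace(M_inv)   # A-optimality: minimize the trace of (X'X)^-1
    D_value = np.linalg.det(M)  # D-optimality: maximize det(X'X)
    E_value = eigvals.min()     # E-optimality: maximize the smallest eigenvalue of X'X
    T_value = np.trace(M)       # T-optimality: maximize the trace of X'X

    # C-optimality for a given linear combination c'beta (this c is an assumption):
    c = np.array([0.0, 1.0, -1.0])
    C_value = c @ M_inv @ c     # variance of the BLUE of c'beta, to be minimized

    print(A_value, D_value, E_value, T_value, C_value)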

Other optimality-criteria are concerned with the variance of predictions:

  • G-optimality
    • A popular criterion is G-optimality, which seeks to minimize the maximum entry in the diagonal of the hat matrix X(X'X)⁻¹X'. This has the effect of minimizing the maximum variance of the predicted values.
  • I-optimality (integrated)
    • A second criterion on prediction variance is I-optimality, which seeks to minimize the average prediction variance over the design space.
  • V-optimality (variance)
    • A third criterion on prediction variance is V-optimality, which seeks to minimize the average prediction variance over a set of m specific points. (A small sketch illustrating these prediction-variance criteria follows this list.)
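
As a minimal sketch of the prediction-variance criteria, the following Python continues the hypothetical design used above; the grid of prediction points is an assumption standing in for the design space:

    import numpy as np

    # Hypothetical design matrix X, as in the earlier sketch.
    X = np.array([
        [1.0, -1.0, -1.0],
        [1.0, -1.0,  1.0],
        [1.0,  1.0, -1.0],
        [1.0,  1.0,  1.0],
    ])
    M_inv = np.linalg.inv(X.T @ X)

    # G-optimality: maximum diagonal entry of the hat matrix X (X'X)^-1 X',
    # i.e. the worst-case (scaled) prediction variance at the design points.
    H = X @ M_inv @ X.T
    G_value = np.max(np.diag(H))

    # I-optimality: average (scaled) prediction variance x'(X'X)^-1 x over the
    # design space, approximated here by a grid over [-1, 1]^2 (an assumption).
    # For V-optimality, the average would instead be taken over m specific points.
    grid = np.array([[1.0, a, b] for a in np.linspace(-1, 1, 5)
                                  for b in np.linspace(-1, 1, 5)])
    pred_var = np.einsum('ij,jk,ik->i', grid, M_inv, grid)
    I_value = pred_var.mean()

    print(G_value, I_value)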
