Hat Matrix - Uncorrelated Errors

Uncorrelated Errors

For uncorrelated errors, the estimated parameters are

β̂ = (XᵀX)⁻¹Xᵀy,

so the fitted values are

ŷ = Xβ̂ = X(XᵀX)⁻¹Xᵀy.

Therefore the hat matrix is given by

H = X(XᵀX)⁻¹Xᵀ.

In the language of linear algebra, the hat matrix is the orthogonal projection onto the column space of the design matrix X. (Note that (XᵀX)⁻¹Xᵀ is the pseudoinverse of X.)

The hat matrix corresponding to a linear model is symmetric (Hᵀ = H) and idempotent (H² = H). However, this is not always the case; in locally weighted scatterplot smoothing (LOESS), for example, the hat matrix is in general neither symmetric nor idempotent.
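
As an illustration (a minimal NumPy sketch, not part of the original article; the design matrix and sample sizes are arbitrary simulated data), the hat matrix can be formed directly from the formula above and checked against these properties:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
# Arbitrary full-rank design matrix with an intercept column (illustrative data only)
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

# Hat matrix for uncorrelated errors: H = X (X^T X)^{-1} X^T
H = X @ np.linalg.inv(X.T @ X) @ X.T

# (X^T X)^{-1} X^T coincides with the Moore-Penrose pseudoinverse of X (full column rank)
print(np.allclose(np.linalg.inv(X.T @ X) @ X.T, np.linalg.pinv(X)))  # True

# H projects onto the column space of X, so it leaves the columns of X unchanged
print(np.allclose(H @ X, X))   # True
# Symmetric and idempotent
print(np.allclose(H, H.T))     # True
print(np.allclose(H @ H, H))   # True
```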

The formula for the vector of residuals r can be expressed compactly using the hat matrix:

r = y − ŷ = y − Hy = (I − H)y.

The covariance matrix of the residuals is therefore, by error propagation, equal to (I − H)Σ(I − H)ᵀ, where Σ is the covariance matrix of the errors (and by extension, the observations as well). For the case of linear models with independent and identically distributed errors in which Σ = σ²I, this reduces to (I − H)σ², using the fact that I − H is symmetric and idempotent.
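
A short sketch along the same lines (again with arbitrary simulated data and an assumed known error scale σ, for illustration only) shows the residual formula and the simplification of the residual covariance when Σ = σ²I:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
sigma = 0.5  # assumed known error standard deviation for this illustration
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=sigma, size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T
I = np.eye(n)

# Residual vector: r = (I - H) y
r = (I - H) @ y
print(np.allclose(r, y - H @ y))  # True

# Error propagation: Cov(r) = (I - H) Sigma (I - H)^T, here with Sigma = sigma^2 I
cov_r = (I - H) @ (sigma**2 * I) @ (I - H).T
# Because I - H is symmetric and idempotent, this collapses to (I - H) sigma^2
print(np.allclose(cov_r, (I - H) * sigma**2))  # True
```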

For linear models, the trace of the hat matrix is equal to the rank of X, which is the number of independent parameters of the linear model. For other models such as LOESS that are still linear in the observations y, the hat matrix can be used to define the effective degrees of freedom of the model.
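
The trace identity can be checked numerically; in this sketch (arbitrary simulated design, not from the original article) tr(H) matches the rank of X, i.e. the number of fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
H = X @ np.linalg.inv(X.T @ X) @ X.T

# For a linear model, tr(H) equals rank(X): the number of independent parameters
print(round(np.trace(H), 6))        # 3.0
print(np.linalg.matrix_rank(X))     # 3
# For other linear smoothers (e.g. LOESS), tr(H) is used as the effective degrees of freedom
```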

The hat matrix has a number of useful algebraic properties. Practical applications of the hat matrix in regression analysis include leverage and Cook's distance, which are concerned with identifying observations which have a large effect on the results of a regression.
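
For example, the leverages are the diagonal entries of H, and Cook's distance can be built from the leverages and residuals. The sketch below (simulated data; it uses one common form of Cook's distance, Dᵢ = rᵢ²hᵢᵢ / (p·s²·(1 − hᵢᵢ)²), assumed here for illustration) flags the most influential observation by each measure:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)        # leverages h_ii
r = y - H @ y         # residuals

# One common form of Cook's distance (assumed here for illustration):
# D_i = r_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2), with s^2 the usual residual variance estimate
s2 = r @ r / (n - p)
D = r**2 * h / (p * s2 * (1 - h)**2)
print(h.argmax(), h.max())   # most influential point by leverage
print(D.argmax(), D.max())   # most influential point by Cook's distance
```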
