Conjugate Gradient Method

In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is an iterative method, so it can be applied to sparse systems that are too large to be handled by direct methods such as the Cholesky decomposition. Such systems often arise when numerically solving partial differential equations.
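The iterative scheme can be sketched in pure Python for a small dense system; the function name and the 2×2 example matrix below are illustrative, not from any particular library.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b iteratively, where A (a list of rows) is
    symmetric positive-definite. Starts from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual r = b - A x (x = 0 initially)
    p = r[:]                 # first search direction is the residual
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        # Matrix-vector product A p
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        # Step length along the current search direction
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:     # residual small enough: converged
            break
        # Next direction is conjugate to all previous ones
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Example: the SPD system [[4, 1], [1, 3]] x = [1, 2]
# has exact solution x = [1/11, 7/11].
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In exact arithmetic the method terminates in at most n iterations for an n-by-n matrix; the 2×2 example above converges in two. For the large sparse systems mentioned above, the dense matrix-vector product would be replaced by a sparse one.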

The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It was developed by Magnus Hestenes and Eduard Stiefel in 1952.

The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear functions.

