Conjugate Gradient Method

In mathematics, the conjugate gradient method is an algorithm for the numerical solution of certain systems of linear equations, namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is an iterative method that requires only matrix–vector products, so it can be applied to sparse systems that are too large to be handled by direct methods such as the Cholesky decomposition. Such systems often arise when numerically solving partial differential equations.
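The iteration described above can be sketched in a few lines. The following is a minimal, unpreconditioned implementation (function and parameter names are illustrative, not from the original text); it assumes A is symmetric positive-definite and accesses A only through matrix–vector products, which is what makes the method suitable for large sparse systems.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x            # residual of the current iterate
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # converged
            break
        # New direction: residual plus a multiple of the old direction,
        # chosen so successive directions are A-conjugate.
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```

In exact arithmetic the loop terminates in at most n iterations, which is why the method can also be viewed as a direct method; in practice it is stopped once the residual is small enough.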

The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It was developed by Magnus Hestenes and Eduard Stiefel in 1952.
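The connection to optimization can be made explicit. For a symmetric positive-definite matrix A, solving the linear system is equivalent to minimizing a quadratic form:

```latex
f(x) = \tfrac{1}{2}\, x^{\mathsf{T}} A x - b^{\mathsf{T}} x,
\qquad
\nabla f(x) = A x - b,
```

so the unique minimizer of f is exactly the solution of Ax = b. This is the sense in which an energy-minimization problem and a symmetric positive-definite linear system are two views of the same computation.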

The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear functions.

