Arnoldi Iteration - Krylov Subspaces and The Power Iteration

An intuitive method for finding the largest eigenvalue (in absolute value) of a given m × m matrix A is the power iteration. Starting with an initial random vector b, this method calculates Ab, A^2 b, A^3 b, …, iteratively storing and normalizing the result into b on every turn. This sequence converges to the eigenvector corresponding to the largest eigenvalue. However, much potentially useful computation is wasted by using only the final result, A^(n-1) b. This suggests that we instead form the so-called Krylov matrix:

K_n = [ b   Ab   A^2 b   …   A^(n-1) b ]
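As a concrete illustration, here is a minimal power-iteration sketch in plain Python; the 2 × 2 matrix, starting vector, and step count are arbitrary choices for the example, not part of the original text:

```python
# Power iteration on a small example matrix, using plain Python lists.

def mat_vec(A, x):
    """Multiply matrix A (given as a list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def normalize(x):
    """Scale x to unit Euclidean length."""
    norm = sum(xi * xi for xi in x) ** 0.5
    return [xi / norm for xi in x]

def power_iteration(A, b, steps=50):
    """Repeatedly apply A and renormalize; b converges to the
    dominant eigenvector of A."""
    for _ in range(steps):
        b = normalize(mat_vec(A, b))
    # The Rayleigh quotient b^T (A b) then estimates the largest eigenvalue.
    Ab = mat_vec(A, b)
    eigenvalue = sum(bi * abi for bi, abi in zip(b, Ab))
    return eigenvalue, b

A = [[2.0, 1.0],
     [1.0, 2.0]]  # symmetric example; eigenvalues are 3 and 1
lam, v = power_iteration(A, [1.0, 0.0])
# lam converges to 3.0, the largest eigenvalue of A
```

Note that each step discards the previous iterate; the Krylov matrix above is precisely a way of keeping all of those intermediate vectors instead.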

The columns of this matrix are not in general orthogonal, but in principle we can extract an orthogonal basis via a method such as Gram–Schmidt orthogonalization. The resulting vectors form an orthogonal basis of the Krylov subspace K_n = span{ b, Ab, A^2 b, …, A^(n-1) b }. We may expect the vectors of this basis to give good approximations of the eigenvectors corresponding to the n largest eigenvalues, for the same reason that A^(n-1) b approximates the dominant eigenvector.
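A sketch of this two-step idea in plain Python: generate the Krylov vectors, then orthonormalize them with classical Gram–Schmidt. The 3 × 3 matrix, starting vector, and n are illustrative choices; a practical Arnoldi implementation interleaves the two steps and prefers modified Gram–Schmidt for numerical stability:

```python
# Build the Krylov vectors b, Ab, ..., A^(n-1) b and orthonormalize
# them with classical Gram-Schmidt. A and b are arbitrary examples.

def mat_vec(A, x):
    """Multiply matrix A (given as a list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def krylov_basis(A, b, n):
    """Return an orthonormal basis of K_n = span{b, Ab, ..., A^(n-1) b}."""
    # Generate the (non-orthogonal) columns of the Krylov matrix.
    krylov = [b]
    for _ in range(n - 1):
        krylov.append(mat_vec(A, krylov[-1]))
    # Gram-Schmidt: subtract each vector's projections onto the
    # basis vectors found so far, then normalize what remains.
    basis = []
    for v in krylov:
        w = list(v)
        for q in basis:
            c = dot(q, v)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:  # drop (nearly) linearly dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
Q = krylov_basis(A, [1.0, 0.0, 0.0], n=3)
# The vectors in Q are orthonormal: dot(Q[i], Q[j]) is 1 if i == j, else 0.
```

In exact arithmetic this is equivalent to a QR factorization of the Krylov matrix; the Arnoldi iteration itself reorganizes this computation so that each new vector is orthogonalized as soon as it is produced.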
