Jacobi Eigenvalue Algorithm - Description

Let S be a symmetric matrix, and G = G(i, j, θ) be a Givens rotation matrix. Then:

\begin{align} S' = G^\top S G \end{align}

is symmetric and similar to S.

Furthermore, S′ has entries:

\begin{align} S'_{ii} &= c^2\, S_{ii} - 2\, s c \,S_{ij} + s^2\, S_{jj} \\ S'_{jj} &= s^2 \,S_{ii} + 2 s c\, S_{ij} + c^2 \, S_{jj} \\ S'_{ij} &= S'_{ji} = (c^2 - s^2 ) \, S_{ij} + s c \, (S_{ii} - S_{jj} ) \\ S'_{ik} &= S'_{ki} = c \, S_{ik} - s \, S_{jk} & k \ne i,j \\ S'_{jk} &= S'_{kj} = s \, S_{ik} + c \, S_{jk} & k \ne i,j \\ S'_{kl} &= S_{kl} &k,l \ne i,j
\end{align}

where s = sin(θ) and c = cos(θ).
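The entry-update formulas above can be sketched as a short NumPy routine (a minimal sketch, not part of the original text; the function name `apply_givens` and the Givens sign convention G[i,i] = G[j,j] = c, G[i,j] = s, G[j,i] = −s are assumptions chosen to match the formulas):

```python
import numpy as np

def apply_givens(S, i, j, theta):
    """Return S' = G^T S G for the Givens rotation G = G(i, j, theta),
    using the explicit entry-update formulas (O(n) work per rotation
    instead of O(n^3) for a full matrix product)."""
    S = S.copy()
    c, s = np.cos(theta), np.sin(theta)
    Sii, Sjj, Sij = S[i, i], S[j, j], S[i, j]
    # Rows/columns k != i, j
    for k in range(len(S)):
        if k in (i, j):
            continue
        Sik, Sjk = S[i, k], S[j, k]
        S[i, k] = S[k, i] = c * Sik - s * Sjk
        S[j, k] = S[k, j] = s * Sik + c * Sjk
    # The 2x2 pivot block
    S[i, i] = c**2 * Sii - 2 * s * c * Sij + s**2 * Sjj
    S[j, j] = s**2 * Sii + 2 * s * c * Sij + c**2 * Sjj
    S[i, j] = S[j, i] = (c**2 - s**2) * Sij + s * c * (Sii - Sjj)
    return S
```

The result agrees with forming the full product G⊤SG explicitly, but only touches the two affected rows and columns.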

Since G is orthogonal, S and S′ have the same Frobenius norm ||·||F (the square root of the sum of squares of all components); however, we can choose θ such that S′ij = 0, in which case S′ has a larger sum of squares on the diagonal:

\begin{align} S'^2_{ii} + S'^2_{jj} = S^2_{ii} + S^2_{jj} + 2 S^2_{ij} \end{align}

Set S′ij equal to 0 and rearrange:

\begin{align} \tan(2\theta) = \frac{2 S_{ij}}{S_{jj} - S_{ii}} \end{align}

if S_{jj} = S_{ii}, then θ = π/4.
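The pivot angle tan(2θ) = 2Sij/(Sjj − Sii) can be computed robustly with a two-argument arctangent, which also covers the Sjj = Sii case (a minimal sketch; the helper name `rotation_angle` is an assumption):

```python
import numpy as np

def rotation_angle(S, i, j):
    """Angle theta that zeroes S'_{ij}: tan(2*theta) = 2 S_ij / (S_jj - S_ii).
    arctan2 also handles S_jj == S_ii, where theta = pi/4."""
    return 0.5 * np.arctan2(2.0 * S[i, j], S[j, j] - S[i, i])
```

Substituting this θ back into S′ij = (c² − s²)Sij + sc(Sii − Sjj) gives zero, as intended.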

To maximize this effect, Sij should be the off-diagonal component with the largest absolute value, called the pivot.

The Jacobi eigenvalue method repeatedly performs such rotations until the matrix is almost diagonal. The diagonal entries are then approximations of the (real) eigenvalues of S.
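Putting the pieces together, the classical method can be sketched as follows (a minimal illustration, not the original author's code; the function name, tolerance, and iteration cap are assumptions):

```python
import numpy as np

def jacobi_eigenvalues(S, tol=1e-12, max_rotations=1000):
    """Classical Jacobi method: repeatedly zero the largest off-diagonal
    entry (the pivot) with a Givens rotation until almost diagonal."""
    S = S.astype(float).copy()
    n = len(S)
    for _ in range(max_rotations):
        # Pivot: off-diagonal entry with the largest absolute value.
        off = np.abs(S - np.diag(np.diag(S)))
        i, j = np.unravel_index(np.argmax(off), off.shape)
        if off[i, j] < tol:
            break
        theta = 0.5 * np.arctan2(2.0 * S[i, j], S[j, j] - S[i, i])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = s, -s
        S = G.T @ S @ G  # full product for clarity; O(n) updates suffice
    return np.sort(np.diag(S))
```

Each rotation can reintroduce small entries at positions zeroed earlier, but the off-diagonal sum of squares shrinks monotonically, so the iteration converges to a diagonal matrix of eigenvalues.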

