Levinson Recursion

Levinson recursion or Levinson–Durbin recursion is a procedure in linear algebra to recursively calculate the solution to an equation involving a Toeplitz matrix. The algorithm runs in Θ(n²) time, a strong improvement over Gauss–Jordan elimination, which runs in Θ(n³).
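
As an illustration of the quadratic-time recursion, the following Python sketch solves Tx = y for a symmetric Toeplitz matrix T specified by its first column t, using the classical forward/backward-vector update. It is a minimal sketch rather than a reference implementation: the function name levinson_solve is chosen here for illustration, and the code assumes t[0] ≠ 0 and that every leading principal submatrix of T is nonsingular, as the classical recursion requires.

    import numpy as np

    def levinson_solve(t, y):
        """Solve T x = y, where T is the symmetric Toeplitz matrix whose
        first column is t, in O(n^2) time via Levinson recursion.
        Illustrative sketch: assumes t[0] != 0 and that every leading
        principal submatrix of T is nonsingular."""
        t = np.asarray(t, dtype=float)
        y = np.asarray(y, dtype=float)
        n = len(t)
        f = np.array([1.0 / t[0]])   # forward vector: T_1 f = e_0
        x = np.array([y[0] / t[0]])  # solution of the 1x1 leading system
        for k in range(1, n):
            # Mismatch of [f, 0] and [x, 0] against row k of the enlarged block.
            eps_f = np.dot(t[1:k + 1][::-1], f)
            eps_x = np.dot(t[1:k + 1][::-1], x)
            # Extend the forward vector; for a symmetric Toeplitz matrix the
            # backward vector is the forward vector reversed.
            f_ext = np.concatenate((f, [0.0]))
            b_ext = f_ext[::-1]
            alpha = 1.0 / (1.0 - eps_f * eps_f)
            f = alpha * (f_ext - eps_f * b_ext)
            b = f[::-1]
            # Correct the last equation of the enlarged system.
            x = np.concatenate((x, [0.0])) + (y[k] - eps_x) * b
        return x

    # Sanity check against a dense solver.
    t = np.array([4.0, 1.0, 0.5, 0.25])
    y = np.array([1.0, 2.0, 3.0, 4.0])
    T = np.array([[t[abs(i - j)] for j in range(4)] for i in range(4)])
    print(np.allclose(levinson_solve(t, y), np.linalg.solve(T, y)))  # True

Each step reuses the solution of the leading k×k subsystem and only forms inner products with one new row of T, which is what keeps the total work quadratic rather than cubic.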

Newer algorithms, called asymptotically fast or sometimes superfast Toeplitz algorithms, can solve a Toeplitz system in Θ(n logᵖ n) for various p (e.g. p = 2, p = 3). Levinson recursion remains popular for several reasons; for one, it is relatively easy to understand in comparison; for another, it can be faster than a superfast algorithm for small n (usually n < 256).

The Levinson–Durbin algorithm was first proposed by Norman Levinson in 1947, improved by James Durbin in 1960, and subsequently refined to 4n² and then 3n² multiplications by W. F. Trench and S. Zohar, respectively.

Other methods for processing such systems include Schur decomposition and Cholesky decomposition. In comparison to these, Levinson recursion (particularly split Levinson recursion) tends to be faster computationally, but more sensitive to computational inaccuracies like round-off errors.
