Tikhonov Regularization

Tikhonov regularization, named for Andrey Tikhonov, is the most commonly used method of regularization of ill-posed problems. In statistics, the method is known as ridge regression, and, with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.

When the following problem is not well posed (either because of non-existence or non-uniqueness of $x$),

$$A x = b,$$

then the standard approach is known as ordinary least squares and seeks to minimize the residual

$$\|A x - b\|_2^2,$$

where $\|\cdot\|_2$ is the Euclidean norm. The lack of well-posedness may be due to the system being overdetermined or underdetermined ($A$ may be ill-conditioned or singular). In the latter case, the least-squares formulation is no better posed than the original problem. In order to give preference to a particular solution with desirable properties, a regularization term is included in the minimization:

$$\|A x - b\|_2^2 + \|\Gamma x\|_2^2$$

for some suitably chosen Tikhonov matrix $\Gamma$. In many cases, this matrix is chosen as a multiple of the identity matrix, $\Gamma = \alpha I$, giving preference to solutions with smaller norms. In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by $\hat{x}$, is given by

$$\hat{x} = (A^T A + \Gamma^T \Gamma)^{-1} A^T b.$$

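As a concrete illustration, the closed-form solution above can be computed directly with NumPy. The following is a minimal sketch, not part of the original article: the function name tikhonov_solve and the example data are assumptions chosen for demonstration, and a linear solve is used in place of an explicit matrix inverse for numerical stability.

```python
import numpy as np

def tikhonov_solve(A, b, Gamma):
    """Minimize ||A x - b||^2 + ||Gamma x||^2 in closed form.

    Computes x_hat = (A^T A + Gamma^T Gamma)^{-1} A^T b, using a linear
    solve rather than an explicit inverse for numerical stability.
    """
    lhs = A.T @ A + Gamma.T @ Gamma  # A^T A + Gamma^T Gamma
    rhs = A.T @ b                    # A^T b
    return np.linalg.solve(lhs, rhs)

# Hypothetical example: an overdetermined system whose nearly dependent
# columns make A^T A ill-conditioned.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
A[:, 4] = A[:, 3] + 1e-8 * rng.standard_normal(20)
b = rng.standard_normal(20)

Gamma = 0.1 * np.eye(A.shape[1])  # Gamma = alpha * I favors small-norm solutions
x_hat = tikhonov_solve(A, b, Gamma)
```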
The effect of regularization may be varied via the scale of the matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least-squares solution, provided that $(A^T A)^{-1}$ exists.
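To illustrate this limiting behavior, the small check below (again a sketch, reusing the hypothetical tikhonov_solve above on a well-conditioned system where $(A^T A)^{-1}$ exists) shows the Tikhonov solution approaching the ordinary least-squares solution as the scale $\alpha$ of $\Gamma = \alpha I$ shrinks.

```python
import numpy as np

# Well-conditioned example, so (A^T A)^{-1} exists and ordinary least
# squares is well defined; tikhonov_solve is the sketch defined above.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x_ols = np.linalg.lstsq(A, b, rcond=None)[0]  # unregularized reference
for alpha in (1.0, 1e-2, 1e-4):
    x_hat = tikhonov_solve(A, b, alpha * np.eye(A.shape[1]))
    # The distance to the OLS solution shrinks along with alpha.
    print(f"alpha={alpha:g}  ||x_hat - x_ols|| = {np.linalg.norm(x_hat - x_ols):.2e}")
```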
