Tikhonov regularization
Tikhonov regularization, named for Andrey Tikhonov, is a method of regularization of ill-posed problems. Also known as ridge regression,[a] it is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters.[1] In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff).[2]
In the simplest case, the problem of a near-singular moment matrix $X^\mathsf{T}X$ is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by

$$\hat{\beta}_R = (X^\mathsf{T}X + \lambda I)^{-1} X^\mathsf{T} y,$$

where $y$ is the regressand, $X$ is the design matrix, $I$ is the identity matrix, and the ridge parameter $\lambda \geq 0$ serves as the constant shifting the diagonals of the moment matrix.[3] It can be shown that this estimator is the solution to the least squares problem subject to the constraint $\beta^\mathsf{T}\beta = c$, which can be expressed as a Lagrangian:

$$\min_{\beta}\, (y - X\beta)^\mathsf{T}(y - X\beta) + \lambda(\beta^\mathsf{T}\beta - c),$$

which shows that $\lambda$ is nothing but the Lagrange multiplier of the constraint. In the case of $\lambda = 0$, in which the constraint is non-binding, the ridge estimator reduces to ordinary least squares. A more general approach to Tikhonov regularization is discussed below.
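As a concrete illustration, the closed-form ridge estimator above can be computed directly with NumPy. This is a minimal sketch, not a reference implementation; the names `X`, `y`, and `lam` are placeholders for the quantities defined above.

```python
import numpy as np

def ridge_estimator(X, y, lam):
    """Simple ridge estimator: (X^T X + lam * I)^{-1} X^T y."""
    n_features = X.shape[1]
    # Shift the diagonal of the moment matrix X^T X by lam.
    A = X.T @ X + lam * np.eye(n_features)
    # Solve the linear system instead of forming an explicit inverse.
    return np.linalg.solve(A, X.T @ y)

# Example with a near-duplicate column, where OLS is ill-conditioned.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 1e-6 * rng.normal(size=50)
y = X @ np.array([1.0, 2.0, 0.0]) + 0.1 * rng.normal(size=50)
print(ridge_estimator(X, y, lam=1.0))
```

Setting `lam=0` recovers the ordinary least squares solution whenever the moment matrix is invertible.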
History
Tikhonov regularization was invented independently in many different contexts. It became widely known from its application to integral equations in the work of Andrey Tikhonov[4][5][6][7][8] and David L. Phillips.[9] Some authors use the term Tikhonov–Phillips regularization. The finite-dimensional case was expounded by Arthur E. Hoerl, who took a statistical approach,[10] and by Manus Foster, who interpreted this method as a Wiener–Kolmogorov (Kriging) filter.[11] Following Hoerl, it is known in the statistical literature as ridge regression.[12]
Tikhonov regularization
Suppose that for a known matrix $A$ and vector $\mathbf{b}$, we wish to find a vector $\mathbf{x}$ such that

$$A\mathbf{x} = \mathbf{b}.$$

The standard approach is ordinary least squares linear regression. However, if no $\mathbf{x}$ satisfies the equation or more than one $\mathbf{x}$ does—that is, the solution is not unique—the problem is said to be ill posed. In such cases, ordinary least squares estimation leads to an overdetermined, or more often an underdetermined, system of equations. Most real-world phenomena have the effect of low-pass filters in the forward direction where $A$ maps $\mathbf{x}$ to $\mathbf{b}$. Therefore, in solving the inverse problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues / singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of $\mathbf{x}$ that is in the null space of $A$, rather than allowing for a model to be used as a prior for $\mathbf{x}$. Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as

$$\|A\mathbf{x} - \mathbf{b}\|_2^2,$$

where $\|\cdot\|_2$ is the Euclidean norm.
In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:

$$\|A\mathbf{x} - \mathbf{b}\|_2^2 + \|\Gamma\mathbf{x}\|_2^2$$

for some suitably chosen Tikhonov matrix $\Gamma$. In many cases, this matrix is chosen as a multiple of the identity matrix ($\Gamma = \alpha I$), giving preference to solutions with smaller norms; this is known as L2 regularization.[13] In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by $\hat{x}$, is given by

$$\hat{x} = (A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma)^{-1} A^\mathsf{T}\mathbf{b}.$$

The effect of regularization may be varied by the scale of matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least-squares solution, provided that $(A^\mathsf{T}A)^{-1}$ exists.
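The explicit solution above holds for any Tikhonov matrix. The sketch below assumes, purely for illustration, that a second-order difference operator is used as $\Gamma$ to favor smooth solutions; the names `A`, `b`, and the test problem are placeholders, not anything prescribed here.

```python
import numpy as np

def tikhonov_solve(A, b, Gamma):
    """Solve min ||A x - b||^2 + ||Gamma x||^2 via the normal equations."""
    lhs = A.T @ A + Gamma.T @ Gamma
    return np.linalg.solve(lhs, A.T @ b)

n = 100
# Second-order difference operator as a smoothness-enforcing Tikhonov matrix.
D2 = np.diff(np.eye(n), n=2, axis=0)

# Illustrative ill-conditioned forward operator and noisy data.
rng = np.random.default_rng(1)
A = rng.normal(size=(80, n)) @ np.diag(1.0 / np.arange(1, n + 1))
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-3 * rng.normal(size=80)

x_hat = tikhonov_solve(A, b, Gamma=0.1 * D2)
```

Scaling `Gamma` up or down trades data fit against smoothness, exactly as described above.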
L2 regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines,[14] and matrix factorization.[15]
Generalized Tikhonov regularization
For general multivariate normal distributions for $\mathbf{x}$ and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an $\mathbf{x}$ to minimize

$$\|A\mathbf{x} - \mathbf{b}\|_P^2 + \|\mathbf{x} - \mathbf{x}_0\|_Q^2,$$

where we have used $\|\mathbf{x}\|_Q^2$ to stand for the weighted norm squared $\mathbf{x}^\mathsf{T} Q \mathbf{x}$ (compare with the Mahalanobis distance). In the Bayesian interpretation $P$ is the inverse covariance matrix of $\mathbf{b}$, $\mathbf{x}_0$ is the expected value of $\mathbf{x}$, and $Q$ is the inverse covariance matrix of $\mathbf{x}$. The Tikhonov matrix is then given as a factorization of the matrix $Q = \Gamma^\mathsf{T}\Gamma$ (e.g. the Cholesky factorization) and is considered a whitening filter.

This generalized problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula

$$\mathbf{x}^* = (A^\mathsf{T} P A + Q)^{-1} (A^\mathsf{T} P \mathbf{b} + Q \mathbf{x}_0),$$

or equivalently

$$\mathbf{x}^* = \mathbf{x}_0 + (A^\mathsf{T} P A + Q)^{-1} A^\mathsf{T} P (\mathbf{b} - A\mathbf{x}_0).$$
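The closed form translates directly into code. This is a minimal sketch under the assumption that the weight matrices are supplied as inverse covariances; `A`, `b`, `P`, `Q`, and `x0` are placeholder names for the quantities defined above.

```python
import numpy as np

def generalized_tikhonov(A, b, P, Q, x0):
    """Minimize ||A x - b||_P^2 + ||x - x0||_Q^2 in closed form."""
    lhs = A.T @ P @ A + Q
    rhs = A.T @ P @ b + Q @ x0
    return np.linalg.solve(lhs, rhs)
    # Equivalent update form:
    # return x0 + np.linalg.solve(lhs, A.T @ P @ (b - A @ x0))
```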
Lavrentyev regularization
In some situations, one can avoid using the transpose $A^\mathsf{T}$, as proposed by Mikhail Lavrentyev.[16] For example, if $A$ is symmetric positive definite, i.e. $A = A^\mathsf{T} > 0$, so is its inverse $A^{-1}$, which can thus be used to set up the weighted norm squared $\|\mathbf{x}\|_P^2 = \mathbf{x}^\mathsf{T} A^{-1} \mathbf{x}$ in the generalized Tikhonov regularization, leading to minimizing

$$\|A\mathbf{x} - \mathbf{b}\|_{A^{-1}}^2 + \|\mathbf{x} - \mathbf{x}_0\|_Q^2$$

or, equivalently up to a constant term,

$$\mathbf{x}^\mathsf{T}(A + Q)\mathbf{x} - 2\,\mathbf{x}^\mathsf{T}(\mathbf{b} + Q\mathbf{x}_0).$$

This minimization problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula

$$\mathbf{x}^* = (A + Q)^{-1}(\mathbf{b} + Q\mathbf{x}_0),$$

which is nothing but the solution of the generalized Tikhonov problem where $A = A^\mathsf{T} = P^{-1}$.
Lavrentyev regularization, if applicable, is advantageous compared to the original Tikhonov regularization, since the Lavrentyev matrix $A + Q$ can be better conditioned, i.e., have a smaller condition number, than the Tikhonov matrix $A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma$.
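A minimal sketch of the Lavrentyev form for a symmetric positive-definite operator, with the simple choice $Q = \lambda I$ and $\mathbf{x}_0 = 0$; `A`, `b`, and `lam` are assumed placeholder names.

```python
import numpy as np

def lavrentyev_solve(A, b, lam):
    """Lavrentyev regularization for symmetric positive definite A:
    solve (A + lam * I) x = b, avoiding the product A^T A entirely."""
    n = A.shape[0]
    # A + lam*I is typically better conditioned than A^T A + lam^2 * I.
    return np.linalg.solve(A + lam * np.eye(n), b)
```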
Regularization in Hilbert space
Typically, discrete linear ill-conditioned problems result from the discretization of integral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpret $A$ as a compact operator on Hilbert spaces, and $\mathbf{x}$ and $\mathbf{b}$ as elements in the domain and range of $A$, respectively. The operator $A^* A + \Gamma^\mathsf{T}\Gamma$ is then a self-adjoint bounded invertible operator.
Relation to singular-value decomposition and Wiener filter
With $\Gamma = \alpha I$, this least-squares solution can be analyzed in a special way using the singular-value decomposition. Given the singular value decomposition

$$A = U \Sigma V^\mathsf{T}$$

with singular values $\sigma_i$, the Tikhonov regularized solution can be expressed as

$$\hat{x} = V D U^\mathsf{T} \mathbf{b},$$

where $D$ has diagonal values

$$D_{ii} = \frac{\sigma_i}{\sigma_i^2 + \alpha^2}$$

and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem. For the generalized case, a similar representation can be derived using a generalized singular-value decomposition.[17]

Finally, it is related to the Wiener filter:

$$\hat{x} = \sum_{i=1}^{q} f_i \frac{u_i^\mathsf{T} \mathbf{b}}{\sigma_i} v_i,$$

where the Wiener weights are $f_i = \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2}$ and $q$ is the rank of $A$.
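The SVD expression is convenient in practice because the decomposition is computed once and the solution can then be evaluated cheaply for many values of $\alpha$. A minimal sketch, with `A`, `b`, and `alpha` as assumed placeholder inputs:

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov solution x_hat = V D U^T b with D_ii = s_i / (s_i^2 + alpha^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    d = s / (s**2 + alpha**2)          # filtered inverse singular values
    return Vt.T @ (d * (U.T @ b))

# The same filter factors written as Wiener weights f_i = s_i^2 / (s_i^2 + alpha^2)
# give x_hat = sum_i f_i * (u_i^T b / s_i) * v_i.
```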
Determination of the Tikhonov factor
The optimal regularization parameter $\alpha$ is usually unknown and often in practical problems is determined by an ad hoc method. A possible approach relies on the Bayesian interpretation described below. Other approaches include the discrepancy principle, cross-validation, the L-curve method,[18] restricted maximum likelihood, and the unbiased predictive risk estimator. Grace Wahba proved that the optimal parameter, in the sense of leave-one-out cross-validation, minimizes[19][20]

$$G = \frac{\operatorname{RSS}}{\tau^2} = \frac{\|A\hat{x} - \mathbf{b}\|^2}{\big[\operatorname{Tr}\big(I - A(A^\mathsf{T}A + \alpha^2 I)^{-1}A^\mathsf{T}\big)\big]^2},$$

where $\operatorname{RSS}$ is the residual sum of squares, and $\tau$ is the effective number of degrees of freedom.

Using the previous SVD decomposition, we can simplify the above expression:

$$\operatorname{RSS} = \left\|\mathbf{b} - \sum_{i=1}^{q} (u_i^\mathsf{T}\mathbf{b})\, u_i\right\|^2 + \left\|\sum_{i=1}^{q} \frac{\alpha^2}{\sigma_i^2 + \alpha^2} (u_i^\mathsf{T}\mathbf{b})\, u_i\right\|^2$$

and

$$\tau = m - \sum_{i=1}^{q} \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2} = m - q + \sum_{i=1}^{q} \frac{\alpha^2}{\sigma_i^2 + \alpha^2}.$$
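The generalized cross-validation score can be evaluated directly from the SVD quantities above and minimized over a grid of candidate parameters. This is a sketch only; the grid and variable names are illustrative assumptions.

```python
import numpy as np

def gcv_score(A, b, alpha):
    """GCV score G = RSS / tau^2 for Tikhonov regularization with Gamma = alpha * I."""
    m = A.shape[0]
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    ub = U.T @ b
    filt = s**2 / (s**2 + alpha**2)        # Wiener-type filter factors
    residual = b - U @ (filt * ub)         # b - A x_hat(alpha)
    rss = residual @ residual
    tau = m - filt.sum()                   # effective degrees of freedom
    return rss / tau**2

# Choose alpha minimizing the GCV score over a logarithmic grid, e.g.:
# alphas = np.logspace(-6, 2, 50)
# alpha_best = min(alphas, key=lambda a: gcv_score(A, b, a))
```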
Relation to probabilistic formulation
The probabilistic formulation of an inverse problem introduces (when all uncertainties are Gaussian) a covariance matrix $C_M$ representing the a priori uncertainties on the model parameters, and a covariance matrix $C_D$ representing the uncertainties on the observed parameters.[21] In the special case when these two matrices are diagonal and isotropic, $C_M = \sigma_M^2 I$ and $C_D = \sigma_D^2 I$, the equations of inverse theory reduce to the equations above, with $\alpha = \sigma_D / \sigma_M$.
Bayesian interpretation
Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix $\Gamma$ seems rather arbitrary, the process can be justified from a Bayesian point of view. Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, the prior probability distribution of $\mathbf{x}$ is sometimes taken to be a multivariate normal distribution. For simplicity here, the following assumptions are made: the means are zero; the components of $\mathbf{x}$ are independent; the components have the same standard deviation $\sigma_x$. The data $\mathbf{b}$ are also subject to errors, and the errors in $\mathbf{b}$ are also assumed to be independent with zero mean and standard deviation $\sigma_b$. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of $\mathbf{x}$, according to Bayes' theorem.[22]
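Under these Gaussian assumptions the connection can be made explicit with a short sketch of the maximum a posteriori calculation, which shows that the posterior mode solves a Tikhonov problem with $\Gamma^\mathsf{T}\Gamma = (\sigma_b^2/\sigma_x^2)\, I$:

$$\hat{\mathbf{x}}_{\mathrm{MAP}} = \arg\max_{\mathbf{x}}\, p(\mathbf{b} \mid \mathbf{x})\, p(\mathbf{x}) = \arg\min_{\mathbf{x}} \left( \frac{\|A\mathbf{x} - \mathbf{b}\|_2^2}{2\sigma_b^2} + \frac{\|\mathbf{x}\|_2^2}{2\sigma_x^2} \right) = \arg\min_{\mathbf{x}} \left( \|A\mathbf{x} - \mathbf{b}\|_2^2 + \frac{\sigma_b^2}{\sigma_x^2}\,\|\mathbf{x}\|_2^2 \right).$$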
If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimum-variance linear unbiased estimator.[23]
See also
- LASSO estimator is another regularization method in statistics.
- Elastic net regularization
- Matrix regularization
Notes
- In statistics, the method is known as ridge regression, in machine learning it is known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.
References
- Kennedy, Peter (2003). A Guide to Econometrics (Fifth ed.). Cambridge: The MIT Press. pp. 205–206. ISBN 0-262-61183-X.
- Gruber, Marvin (1998). Improving Efficiency by Shrinkage: The James–Stein and Ridge Regression Estimators. Boca Raton: CRC Press. pp. 7–15. ISBN 0-8247-0156-9.
- For the choice of $\lambda$ in practice, see Khalaf, Ghadban; Shukur, Ghazi (2005). "Choosing Ridge Parameter for Regression Problems". Communications in Statistics – Theory and Methods. 34 (5): 1177–1182. doi:10.1081/STA-200056836.
- Tikhonov, Andrey Nikolayevich (1943). "Об устойчивости обратных задач" [On the stability of inverse problems]. Doklady Akademii Nauk SSSR. 39 (5): 195–198.
- Tikhonov, A. N. (1963). "О решении некорректно поставленных задач и методе регуляризации" [On the solution of incorrectly formulated problems and the regularization method]. Doklady Akademii Nauk SSSR. 151: 501–504. Translated in "Solution of incorrectly formulated problems and the regularization method". Soviet Mathematics. 4: 1035–1038.
- Tikhonov, A. N.; V. Y. Arsenin (1977). Solution of Ill-posed Problems. Washington: Winston & Sons. ISBN 0-470-99124-0.
- Tikhonov, Andrey Nikolayevich; Goncharsky, A.; Stepanov, V. V.; Yagola, Anatolij Grigorevic (30 June 1995). Numerical Methods for the Solution of Ill-Posed Problems. Netherlands: Springer Netherlands. ISBN 079233583X. Retrieved 9 August 2018.
- Tikhonov, Andrey Nikolaevich; Leonov, Aleksandr S.; Yagola, Anatolij Grigorevic (1998). Nonlinear ill-posed problems. London: Chapman & Hall. ISBN 0412786605. Retrieved 9 August 2018.
- Phillips, D. L. (1962). "A Technique for the Numerical Solution of Certain Integral Equations of the First Kind". Journal of the ACM. 9: 84–97. doi:10.1145/321105.321114.
- Hoerl, Arthur E. (1962). "Application of Ridge Analysis to Regression Problems". Chemical Engineering Progress. 58 (3): 54–59.
- Foster, M. (1961). "An Application of the Wiener-Kolmogorov Smoothing Theory to Matrix Inversion". Journal of the Society for Industrial and Applied Mathematics. 9 (3): 387–392. doi:10.1137/0109031.
- Hoerl, A. E.; R. W. Kennard (1970). "Ridge regression: Biased estimation for nonorthogonal problems". Technometrics. 12 (1): 55–67. doi:10.1080/00401706.1970.10488634.
- Ng, Andrew Y. (2004). Feature selection, L1 vs. L2 regularization, and rotational invariance (PDF). Proc. ICML.
- R.-E. Fan; K.-W. Chang; C.-J. Hsieh; X.-R. Wang; C.-J. Lin (2008). "LIBLINEAR: A library for large linear classification". Journal of Machine Learning Research. 9: 1871–1874.
- Guan, Naiyang; Tao, Dacheng; Luo, Zhigang; Yuan, Bo (2012). "Online nonnegative matrix factorization with robust stochastic approximation". IEEE Transactions on Neural Networks and Learning Systems. 23 (7): 1087–1099. doi:10.1109/TNNLS.2012.2197827. PMID 24807135.
- Lavrentiev, M. M. (1967). Some Improperly Posed Problems of Mathematical Physics. New York: Springer.
- Hansen, Per Christian (Jan 1, 1998). Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion (1st ed.). Philadelphia, USA: SIAM. ISBN 9780898714036.
- P. C. Hansen, "The L-curve and its use in the numerical treatment of inverse problems".
- Wahba, G. (1990). "Spline Models for Observational Data". CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics. Bibcode:1990smod.conf.....W.
- Golub, G.; Heath, M.; Wahba, G. (1979). "Generalized cross-validation as a method for choosing a good ridge parameter" (PDF). Technometrics. 21 (2): 215–223. doi:10.1080/00401706.1979.10489751.
- Tarantola, Albert (2005). Inverse Problem Theory and Methods for Model Parameter Estimation (1st ed.). Philadelphia: Society for Industrial and Applied Mathematics (SIAM). ISBN 0898717922. Retrieved 9 August 2018.
- Vogel, Curtis R. (2002). Computational methods for inverse problems. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 0-89871-550-4.
- Amemiya, Takeshi (1985). Advanced Econometrics. Harvard University Press. pp. 60–61. ISBN 0-674-00560-0.
Further reading
- Gruber, Marvin (1998). Improving Efficiency by Shrinkage: The James–Stein and Ridge Regression Estimators. Boca Raton: CRC Press. ISBN 0-8247-0156-9.
- Kress, Rainer (1998). "Tikhonov Regularization". Numerical Analysis. New York: Springer. pp. 86–90. ISBN 0-387-98408-9.
- Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 19.5. Linear Regularization Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
- Saleh, A. K. Md. Ehsanes; Arashi, Mohammad; Kibria, B. M. Golam (2019). Theory of Ridge Regression Estimation with Applications. New York: John Wiley & Sons. ISBN 978-1-118-64461-4.
- Taddy, Matt (2019). "Regularization". Business Data Science: Combining Machine Learning and Economics to Optimize, Automate, and Accelerate Business Decisions. New York: McGraw-Hill. pp. 69–104. ISBN 978-1-260-45277-8.