Woodbury matrix identity

In mathematics (specifically linear algebra), the Woodbury matrix identity, named after Max A. Woodbury,[1][2] says that the inverse of a rank-k correction of some matrix can be computed by doing a rank-k correction to the inverse of the original matrix. Alternative names for this formula are the matrix inversion lemma, the Sherman–Morrison–Woodbury formula, or just the Woodbury formula. However, the identity appeared in several papers before the Woodbury report.[3]

The Woodbury matrix identity is[4]

$$(A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1},$$

where A, U, C and V all denote matrices of the correct (conformable) sizes. Specifically, A is n-by-n, U is n-by-k, C is k-by-k and V is k-by-n. This can be derived using blockwise matrix inversion.
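As an illustrative check, the identity can be verified numerically. The following NumPy sketch uses arbitrary sizes (n = 6, k = 2), a fixed random seed, and matrices shifted to be well conditioned; these choices are for demonstration only.

```python
import numpy as np

# Arbitrary illustrative sizes: A is n-by-n, U is n-by-k, C is k-by-k, V is k-by-n.
rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted so A is comfortably invertible
U = rng.standard_normal((n, k))
C = rng.standard_normal((k, k)) + k * np.eye(k)   # shifted so C is invertible
V = rng.standard_normal((k, n))

A_inv = np.linalg.inv(A)

# Left-hand side: invert the rank-k corrected matrix directly.
lhs = np.linalg.inv(A + U @ C @ V)

# Right-hand side: Woodbury identity; only k-by-k matrices are inverted besides A.
rhs = A_inv - A_inv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U) @ V @ A_inv

print(np.allclose(lhs, rhs))  # True, up to floating-point round-off
```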

While the identity is primarily used on matrices, it holds in a general ring or in an Ab-category.

Discussion

To prove this result, we will start by proving a simpler one. Replacing A and C with the identity matrix I, we obtain another identity which is a bit simpler:

$$(I + UV)^{-1} = I - U(I + VU)^{-1}V.$$

To recover the original equation from this reduced identity, write $A + UCV = A\,(I + (A^{-1}U)(CV))$ and apply the reduced identity with $A^{-1}U$ in place of U and $CV$ in place of V.
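Written out, this recovery step is a short computation:

$$\begin{aligned}
(A + UCV)^{-1} &= \bigl(I + (A^{-1}U)(CV)\bigr)^{-1}A^{-1} \\
&= \bigl(I - A^{-1}U\,(I + CVA^{-1}U)^{-1}CV\bigr)A^{-1} \\
&= A^{-1} - A^{-1}U\,(C^{-1} + VA^{-1}U)^{-1}VA^{-1},
\end{aligned}$$

where the last equality uses $(I + CVA^{-1}U)^{-1}C = (C^{-1} + VA^{-1}U)^{-1}$.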

This identity itself can be viewed as the combination of two simpler identities. We obtain the first identity from

$$I = (I + P)^{-1}(I + P) = (I + P)^{-1} + (I + P)^{-1}P,$$

thus,

$$(I + P)^{-1} = I - (I + P)^{-1}P,$$

and similarly

$$(I + P)^{-1} = I - P(I + P)^{-1}.$$

The second identity is the so-called push-through identity[5]

$$(I + UV)^{-1}U = U(I + VU)^{-1}$$

that we obtain from

$$U(I + VU) = (I + UV)U$$

after multiplying by $(I + VU)^{-1}$ on the right and by $(I + UV)^{-1}$ on the left.
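Combining the two, with $P = UV$ in the first identity and the push-through identity applied to the remaining product, gives the reduced identity above:

$$(I + UV)^{-1} = I - (I + UV)^{-1}UV = I - U(I + VU)^{-1}V.$$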

Special cases

When U and V are vectors (so that k = 1 and C is a scalar), the identity reduces to the Sherman–Morrison formula.

In the scalar case, the reduced version is simply

$$\frac{1}{1 + uv} = 1 - \frac{uv}{1 + uv}.$$

Inverse of a sum

If n = k and U = V = In is the identity matrix, then, writing B for C,

$$(A + B)^{-1} = A^{-1} - A^{-1}(B^{-1} + A^{-1})^{-1}A^{-1}.$$

Continuing to merge the terms of the far right-hand side of the above equation results in Hua's identity

$$(A + B)^{-1} = A^{-1} - (A + AB^{-1}A)^{-1}.$$

Another useful form of the same identity is

$$(A - B)^{-1} = A^{-1} + A^{-1}B(A - B)^{-1},$$

which has a recursive structure that yields

$$(A - B)^{-1} = \sum_{k=0}^{\infty} (A^{-1}B)^{k} A^{-1}.$$

This form can be used in perturbative expansions where B is a perturbation of A; the series converges when the spectral radius of $A^{-1}B$ is less than one.
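As an illustrative sketch of this perturbative use (the matrices, seed, and truncation order below are arbitrary choices), the series can be truncated to approximate the inverse of a slightly perturbed matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned base matrix
B = 0.05 * rng.standard_normal((n, n))                     # small perturbation of A

A_inv = np.linalg.inv(A)
exact = np.linalg.inv(A - B)

# Truncated series: (A - B)^{-1} ≈ sum_{k=0}^{K} (A^{-1} B)^k A^{-1}, here with K = 5.
approx = np.zeros_like(A)
term = A_inv.copy()                 # k = 0 term
for _ in range(6):
    approx += term
    term = A_inv @ B @ term         # next term: (A^{-1} B)^{k+1} A^{-1}

print(np.max(np.abs(approx - exact)))  # small; shrinks as more terms are added
```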

Variations

Binomial inverse theorem

If A, U, B, V are matrices of sizes p×p, p×q, q×q, q×p, respectively, then

$$(A + UBV)^{-1} = A^{-1} - A^{-1}UB(B + BVA^{-1}UB)^{-1}BVA^{-1}$$

provided A and $B + BVA^{-1}UB$ are nonsingular. Nonsingularity of the latter requires that $B^{-1}$ exist, since it equals $B(I + VA^{-1}UB)$ and the rank of the latter cannot exceed the rank of B.[5]

Since B is invertible, the two B terms flanking the inverted parenthetical quantity on the right-hand side can be written as $(B^{-1})^{-1}$ and absorbed into it, giving $B(B + BVA^{-1}UB)^{-1}B = (B^{-1} + VA^{-1}U)^{-1}$, which results in the original Woodbury identity.

A variation for when B is singular and possibly even non-square:[5]

$$(A + UBV)^{-1} = A^{-1} - A^{-1}U(I + BVA^{-1}U)^{-1}BVA^{-1}.$$

Formulas also exist for certain cases in which A is singular.[6]
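The variation above for singular or non-square B can be checked numerically. The sketch below uses arbitrary sizes with a deliberately non-square B, so B has no inverse at all:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, r = 6, 3, 2
A = rng.standard_normal((p, p)) + p * np.eye(p)   # p-by-p, kept well away from singular
U = rng.standard_normal((p, q))                   # p-by-q
B = rng.standard_normal((q, r))                   # q-by-r: non-square, hence not invertible
V = rng.standard_normal((r, p))                   # r-by-p

A_inv = np.linalg.inv(A)
lhs = np.linalg.inv(A + U @ B @ V)
rhs = A_inv - A_inv @ U @ np.linalg.inv(np.eye(q) + B @ V @ A_inv @ U) @ B @ V @ A_inv
print(np.allclose(lhs, rhs))  # True
```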

Derivations

Direct proof

The formula can be proven by checking that $A + UCV$ times its alleged inverse on the right side of the Woodbury identity gives the identity matrix:

$$\begin{aligned}
&(A + UCV)\left[A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}\right] \\
&\quad = I + UCVA^{-1} - (U + UCVA^{-1}U)(C^{-1} + VA^{-1}U)^{-1}VA^{-1} \\
&\quad = I + UCVA^{-1} - UC(C^{-1} + VA^{-1}U)(C^{-1} + VA^{-1}U)^{-1}VA^{-1} \\
&\quad = I + UCVA^{-1} - UCVA^{-1} = I.
\end{aligned}$$

Alternative proofs

Algebraic proof

First consider these useful identities,

$$U + UCVA^{-1}U = UC(C^{-1} + VA^{-1}U) = (A + UCV)A^{-1}U$$

$$(A + UCV)^{-1}UC = A^{-1}U(C^{-1} + VA^{-1}U)^{-1}$$

Now,

$$\begin{aligned}
A^{-1} &= (A + UCV)^{-1}(A + UCV)A^{-1} \\
&= (A + UCV)^{-1}(I + UCVA^{-1}) \\
&= (A + UCV)^{-1} + (A + UCV)^{-1}UCVA^{-1} \\
&= (A + UCV)^{-1} + A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1},
\end{aligned}$$

which, after rearranging, is the Woodbury identity.

Derivation via blockwise elimination

Deriving the Woodbury matrix identity is easily done by solving the following block matrix inversion problem

$$\begin{bmatrix} A & U \\ V & -C^{-1} \end{bmatrix}\begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} I \\ 0 \end{bmatrix}.$$

Expanding, we can see that the above reduces to

$$\begin{cases} AX + UY = I \\ VX - C^{-1}Y = 0, \end{cases}$$

which is equivalent to $(A + UCV)X = I$. Solving the first equation for X, we find that $X = A^{-1}(I - UY)$, which can be substituted into the second to find $VA^{-1}(I - UY) = C^{-1}Y$. Expanding and rearranging, we have $VA^{-1} = (C^{-1} + VA^{-1}U)Y$, or $Y = (C^{-1} + VA^{-1}U)^{-1}VA^{-1}$. Finally, we substitute into $AX + UY = I$, and we have $AX + U(C^{-1} + VA^{-1}U)^{-1}VA^{-1} = I$. Thus,

$$(A + UCV)^{-1} = X = A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}.$$
We have derived the Woodbury matrix identity.

Derivation from LDU decomposition

We start with the matrix

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix}.$$

By eliminating the entry under the A (given that A is invertible) we get

$$\begin{bmatrix} I & 0 \\ -VA^{-1} & I \end{bmatrix}\begin{bmatrix} A & U \\ V & C \end{bmatrix} = \begin{bmatrix} A & U \\ 0 & C - VA^{-1}U \end{bmatrix}.$$

Likewise, eliminating the entry above C gives

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix}\begin{bmatrix} I & -A^{-1}U \\ 0 & I \end{bmatrix} = \begin{bmatrix} A & 0 \\ V & C - VA^{-1}U \end{bmatrix}.$$

Now combining the above two, we get

$$\begin{bmatrix} I & 0 \\ -VA^{-1} & I \end{bmatrix}\begin{bmatrix} A & U \\ V & C \end{bmatrix}\begin{bmatrix} I & -A^{-1}U \\ 0 & I \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & C - VA^{-1}U \end{bmatrix}.$$

Moving the triangular factors to the right side gives

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix} = \begin{bmatrix} I & 0 \\ VA^{-1} & I \end{bmatrix}\begin{bmatrix} A & 0 \\ 0 & C - VA^{-1}U \end{bmatrix}\begin{bmatrix} I & A^{-1}U \\ 0 & I \end{bmatrix},$$

which is the LDU decomposition of the block matrix into lower triangular, diagonal, and upper triangular factors.

Now inverting both sides gives

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix}^{-1} = \begin{bmatrix} I & -A^{-1}U \\ 0 & I \end{bmatrix}\begin{bmatrix} A^{-1} & 0 \\ 0 & (C - VA^{-1}U)^{-1} \end{bmatrix}\begin{bmatrix} I & 0 \\ -VA^{-1} & I \end{bmatrix}. \qquad (1)$$

We could equally well have done it the other way (provided that C is invertible), i.e.

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix} = \begin{bmatrix} I & UC^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} A - UC^{-1}V & 0 \\ 0 & C \end{bmatrix}\begin{bmatrix} I & 0 \\ C^{-1}V & I \end{bmatrix}.$$

Now again inverting both sides,

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix}^{-1} = \begin{bmatrix} I & 0 \\ -C^{-1}V & I \end{bmatrix}\begin{bmatrix} (A - UC^{-1}V)^{-1} & 0 \\ 0 & C^{-1} \end{bmatrix}\begin{bmatrix} I & -UC^{-1} \\ 0 & I \end{bmatrix}. \qquad (2)$$

Now comparing the (1, 1) elements of the right-hand sides of (1) and (2) above gives the Woodbury formula

$$(A - UC^{-1}V)^{-1} = A^{-1} + A^{-1}U(C - VA^{-1}U)^{-1}VA^{-1}.$$
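As a numerical sanity check of this last form and of the block-matrix comparison (the sizes and random seed below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((k, k)) + k * np.eye(k)
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))

A_inv = np.linalg.inv(A)
C_inv = np.linalg.inv(C)

# Woodbury formula in the (A - U C^{-1} V) form derived above.
lhs = np.linalg.inv(A - U @ C_inv @ V)
rhs = A_inv + A_inv @ U @ np.linalg.inv(C - V @ A_inv @ U) @ V @ A_inv
print(np.allclose(lhs, rhs))  # True

# The same quantity is the (1, 1) block of the inverse of the block matrix [[A, U], [V, C]].
M = np.block([[A, U], [V, C]])
print(np.allclose(np.linalg.inv(M)[:n, :n], lhs))  # True
```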

Applications

This identity is useful in certain numerical computations where $A^{-1}$ has already been computed and it is desired to compute $(A + UCV)^{-1}$. With the inverse of A available, it is only necessary to find the inverse of $C^{-1} + VA^{-1}U$ in order to obtain the result using the right-hand side of the identity. If C has a much smaller dimension than A, this is more efficient than inverting $A + UCV$ directly. A common case is finding the inverse of a low-rank update $A + UCV$ of A (where U has only a few columns and V only a few rows), or finding an approximation of the inverse of the matrix $A + B$ where the matrix B can be approximated by a low-rank matrix $UCV$, for example using the singular value decomposition.
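A minimal sketch of this pattern follows; the helper name woodbury_update and the use of a dense k-by-k solve are illustrative choices, not a prescribed implementation:

```python
import numpy as np

def woodbury_update(A_inv, U, C, V):
    """Return (A + U C V)^{-1}, reusing a precomputed A^{-1}.

    Only a k-by-k system (k = number of columns of U) is formed and solved,
    which is cheap when k is much smaller than the dimension of A.
    """
    small = np.linalg.inv(C) + V @ A_inv @ U          # k-by-k matrix
    correction = A_inv @ U @ np.linalg.solve(small, V @ A_inv)
    return A_inv - correction

# Usage sketch with arbitrary sizes.
rng = np.random.default_rng(4)
n, k = 8, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
U, V = rng.standard_normal((n, k)), rng.standard_normal((k, n))
C = np.eye(k)
A_inv = np.linalg.inv(A)
print(np.allclose(woodbury_update(A_inv, U, C, V), np.linalg.inv(A + U @ C @ V)))  # True
```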

This is applied, e.g., in the Kalman filter and recursive least squares methods, to replace the parametric solution, which requires inverting a matrix of the size of the state vector, with a solution based on the condition equations. In the case of the Kalman filter this matrix has the dimensions of the vector of observations, i.e., as small as 1 if only one new observation is processed at a time. This significantly speeds up the often real-time calculations of the filter.
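As a schematic illustration of the recursive least squares use (the function name rls_covariance_update and the setup are illustrative, not the article's formulation), a single new regressor row x can update the inverse normal-equations matrix with a rank-1 Sherman–Morrison step instead of a fresh inversion:

```python
import numpy as np

def rls_covariance_update(P, x):
    """Rank-1 Sherman–Morrison update of P = (X^T X)^{-1} after appending row x.

    New inverse: (X^T X + x x^T)^{-1} = P - (P x)(P x)^T / (1 + x^T P x).
    Only matrix-vector products are needed; no matrix is inverted.
    """
    Px = P @ x
    return P - np.outer(Px, Px) / (1.0 + x @ Px)

# Usage sketch: keep (X^T X)^{-1} current as observations arrive one at a time.
rng = np.random.default_rng(5)
d = 4
X = rng.standard_normal((10, d))
P = np.linalg.inv(X.T @ X)
x_new = rng.standard_normal(d)
P_updated = rls_covariance_update(P, x_new)
print(np.allclose(P_updated, np.linalg.inv(X.T @ X + np.outer(x_new, x_new))))  # True
```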

In the case when C is the identity matrix I, the matrix $I + VA^{-1}U$ is known in numerical linear algebra and numerical partial differential equations as the capacitance matrix.[3]



Notes

  1. Max A. Woodbury, Inverting modified matrices, Memorandum Rept. 42, Statistical Research Group, Princeton University, Princeton, NJ, 1950, 4 pp. MR 38136
  2. Max A. Woodbury, The Stability of Out-Input Matrices, Chicago, Ill., 1949, 5 pp. MR 32564
  3. Hager, William W. (1989). "Updating the inverse of a matrix". SIAM Review. 31 (2): 221–239. doi:10.1137/1031049. JSTOR 2030425. MR 0997457.
  4. Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. p. 258. ISBN 978-0-89871-521-7. MR 1927606.
  5. Henderson, H. V.; Searle, S. R. (1981). "On deriving the inverse of a sum of matrices". SIAM Review. 23: 53–60. doi:10.1137/1023004. JSTOR 2029838.
  6. Riedel, Kurt S. (1992). "A Sherman–Morrison–Woodbury Identity for Rank Augmenting Matrices with Application to Centering". SIAM Journal on Matrix Analysis and Applications. 13 (1992): 659–662. doi:10.1137/0613040. MR 1152773.
  • Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 2.7.3. Woodbury Formula", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8