Least-squares function approximation

In mathematics, least squares function approximation applies the principle of least squares to function approximation, by means of a weighted sum of other functions. The best approximation can be defined as that which minimises the difference between the original function and the approximation; for a least-squares approach the quality of the approximation is measured in terms of the squared differences between the two.
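In symbols, for an approximation $f_n$ to $f$ on an interval $[a, b]$, the quantity to be minimised is the integrated squared difference (written here in an unweighted form; this merely restates the sentence above rather than coming from the cited sources):

$$ E = \int_a^b \left[ f(x) - f_n(x) \right]^2 \, dx. $$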

Functional analysis

A generalization to approximation of a data set is the approximation of a function by a sum of other functions, usually an orthogonal set:[1]

$$ f(x) \approx f_n(x) = a_1 \varphi_1(x) + a_2 \varphi_2(x) + \cdots + a_n \varphi_n(x), $$

with the set of functions $\{ \varphi_j(x) \}$ an orthonormal set over the interval of interest, say $[a, b]$: see also Fejér's theorem. The coefficients $\{ a_j \}$ are selected to make the magnitude of the difference $\| f - f_n \|^2$ as small as possible. For example, the magnitude, or norm, of a function $g(x)$ over the interval $[a, b]$ can be defined by:[2]

$$ \| g \| = \left( \int_a^b g^*(x)\, g(x) \, dx \right)^{1/2}, $$
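As a concrete check, this norm can be evaluated by numerical quadrature. The sketch below is an illustration only, not part of the cited texts: it assumes SciPy is available and takes $g = \sin$ on $[0, \pi]$, a real function, so the conjugate $g^*$ coincides with $g$.

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, np.pi
g = np.sin

# ||g||^2 = integral over [a, b] of g*(x) g(x) = |g(x)|^2 (g is real here)
norm_sq, _ = quad(lambda t: abs(g(t)) ** 2, a, b)
print(norm_sq)           # ~ pi/2 ~ 1.5708 for g = sin on [0, pi]
print(np.sqrt(norm_sq))  # the norm ||g||
```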

where the ‘*’ denotes complex conjugate in the case of complex functions. The extension of Pythagoras' theorem in this manner leads to function spaces and the notion of Lebesgue measure, an idea of “space” more general than the original basis of Euclidean geometry. The $\{ \varphi_j(x) \}$ satisfy orthonormality relations:[3]

$$ \int_a^b \varphi_i^*(x) \varphi_j(x) \, dx = \delta_{ij}, $$

where $\delta_{ij}$ is the Kronecker delta. Substituting function $f_n$ into these equations then leads to the $n$-dimensional Pythagorean theorem:[4]

$$ \| f_n \|^2 = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2. $$

The coefficients $\{ a_j \}$ making $\| f - f_n \|^2$ as small as possible are found to be:[1]

$$ a_j = \int_a^b \varphi_j^*(x) f(x) \, dx. $$

The generalization of the n-dimensional Pythagorean theorem to infinite-dimensional real inner product spaces is known as Parseval's identity or Parseval's equation.[5] Particular examples of such a representation of a function are the Fourier series and the generalized Fourier series.
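The whole construction can be verified numerically. The sketch below is an illustration under assumptions chosen here for concreteness, not taken from the cited sources: SciPy is assumed available, the orthonormal set is $\varphi_j(x) = \sqrt{2/\pi} \sin(jx)$ on $[0, \pi]$, and $f(x) = x(\pi - x)$.

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, np.pi
f = lambda x: x * (np.pi - x)
phi = lambda j, x: np.sqrt(2.0 / np.pi) * np.sin(j * x)   # orthonormal on [0, pi]

n = 8
# a_j = integral of phi_j*(x) f(x) dx; everything is real here, so no conjugate
coeffs = [quad(lambda x, j=j: phi(j, x) * f(x), a, b)[0] for j in range(1, n + 1)]

f_norm_sq = quad(lambda x: f(x) ** 2, a, b)[0]    # ||f||^2
fn_norm_sq = sum(c ** 2 for c in coeffs)          # n-dimensional Pythagorean theorem
print(f_norm_sq, fn_norm_sq)
```

Increasing n drives $\| f_n \|^2$ upward toward $\| f \|^2$, which is Parseval's identity in action.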

Further discussion

Using linear algebra

It follows that one can find a "best" approximation of another function by minimizing the area between two functions, a continuous function $f$ on $[a, b]$ and a function $g \in W$, where $W$ is a subspace of $C[a, b]$:

$$ \text{Area} = \int_a^b |f(x) - g(x)| \, dx, $$

all within the subspace $W$. Due to the frequent difficulty of evaluating integrands involving absolute value, one can instead define

$$ \int_a^b [f(x) - g(x)]^2 \, dx $$

as an adequate criterion for obtaining the least squares approximation, function $g$, of $f$ with respect to the inner product space $W$.
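To see why discarding the absolute value helps, consider the simplest case of approximating $f$ by a constant $c$ (a one-dimensional subspace). The sketch below is an illustration of this point only, not drawn from the cited texts; it assumes SciPy and takes $f(x) = x^2$ on $[0, 1]$. The squared criterion is smooth in $c$ and is minimized by the mean value of $f$, whereas the absolute-value criterion is not differentiable wherever $f(x) = c$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

f = lambda x: x ** 2

sq_err  = lambda c: quad(lambda x: (f(x) - c) ** 2, 0.0, 1.0)[0]
abs_err = lambda c: quad(lambda x: abs(f(x) - c), 0.0, 1.0)[0]

c_sq  = minimize_scalar(sq_err,  bounds=(0.0, 1.0), method='bounded').x
c_abs = minimize_scalar(abs_err, bounds=(0.0, 1.0), method='bounded').x
print(c_sq)   # ~ 1/3: the mean of f, i.e. the least squares constant
print(c_abs)  # ~ 1/4: a "median" level, since x^2 < 1/4 on exactly half of [0, 1]
```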

As such, $\| f - g \|^2$, or equivalently $\| f - g \|$, can thus be written in vector form:

$$ \int_a^b [f(x) - g(x)]^2 \, dx = \langle f - g, f - g \rangle = \| f - g \|^2. $$

In other words, the least squares approximation of $f$ is the function $g \in W$ closest to $f$ in terms of the inner product $\langle f, g \rangle$. Furthermore, this can be applied with a theorem:

Let $f$ be continuous on $[a, b]$, and let $W$ be a finite-dimensional subspace of $C[a, b]$. The least squares approximating function of $f$ with respect to $W$ is given by

$$ g = \langle f, \vec{w}_1 \rangle \vec{w}_1 + \langle f, \vec{w}_2 \rangle \vec{w}_2 + \cdots + \langle f, \vec{w}_n \rangle \vec{w}_n, $$

where $B = \{ \vec{w}_1, \vec{w}_2, \ldots, \vec{w}_n \}$ is an orthonormal basis obtained by applying the Gram–Schmidt process to an arbitrary basis for $W$.
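A numerical sketch of the theorem follows. It is an illustration under assumptions of my choosing rather than a construction from the cited texts: NumPy is assumed available, $f(x) = e^x$ on $[0, 1]$, $W = \mathrm{span}\{1, x, x^2\}$, and the inner product is approximated by a trapezoidal rule on a grid.

```python
import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 20_001)
wts = np.full_like(x, x[1] - x[0])
wts[0] = wts[-1] = (x[1] - x[0]) / 2        # composite trapezoidal weights

inner = lambda u, v: float(np.dot(u * v, wts))   # <u, v> = integral of u(x) v(x)

f = np.exp(x)
basis = [np.ones_like(x), x, x ** 2]        # an arbitrary basis for W

# Gram-Schmidt: build an orthonormal basis {w_1, ..., w_n} for W
ortho = []
for v in basis:
    u = v - sum(inner(v, q) * q for q in ortho)
    ortho.append(u / np.sqrt(inner(u, u)))

# Theorem: g = <f, w_1> w_1 + ... + <f, w_n> w_n
g = sum(inner(f, q) * q for q in ortho)

print(np.sqrt(inner(f - g, f - g)))         # L2 error of the projection, ~ 5e-3
```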

References

  1. Lanczos, Cornelius (1988). Applied Analysis (Reprint of 1956 Prentice–Hall ed.). Dover Publications. pp. 212–213. ISBN 0-486-65656-X.
  2. Folland, Gerald B (2009). "Equation 3.14". Fourier Analysis and Its Applications (Reprint of 1992 Wadsworth & Brooks/Cole ed.). American Mathematical Society. p. 69. ISBN 0-8218-4790-2.
  3. Folland, Gerald B (2009). Fourier Analysis and Its Applications. American Mathematical Society. p. 69. ISBN 0-8218-4790-2.
  4. Saville, David J.; Wood, Graham R. (1991). "§2.5 Sum of squares". Statistical Methods: The Geometric Approach (3rd ed.). Springer. p. 30. ISBN 0-387-97517-9.
  5. Folland, Gerald B (2009). "Equation 3.22". Fourier Analysis and Its Applications. American Mathematical Society. p. 77. ISBN 0-8218-4790-2.