Signal reconstruction

In signal processing, reconstruction usually means the determination of an original continuous signal from a sequence of equally spaced samples.

This article takes a generalized abstract mathematical approach to signal sampling and reconstruction. For a more practical approach based on band-limited signals, see Whittaker–Shannon interpolation formula.

General principle

Let $F$ be any sampling method, i.e. a linear map from the Hilbert space of square-integrable functions $L^2$ to complex space $\mathbb{C}^n$.

In our example, the vector space of sampled signals $\mathbb{C}^n$ is $n$-dimensional complex space. Any proposed inverse $R$ of $F$ (a reconstruction formula, in the lingo) would have to map $\mathbb{C}^n$ to some subset of $L^2$. We could choose this subset arbitrarily, but if we want the reconstruction formula $R$ to be a linear map as well, then we have to choose an $n$-dimensional linear subspace of $L^2$.
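
To make the abstract setup concrete, here is a minimal sketch in Python with NumPy of one possible sampling map $F$: point sampling at $n$ equally spaced instants of $[0, 1)$. The interval, the sample count, and all function names are choices made here for illustration, not part of the article.

```python
import numpy as np

def point_sample(f, n):
    """One concrete choice of the sampling map F: point sampling.

    Sends a signal f defined on [0, 1) to the vector
    (f(0), f(1/n), ..., f((n-1)/n)) in C^n.  The map is linear in f:
    sampling a*f + b*g gives a times the samples of f plus b times
    the samples of g.
    """
    t = np.arange(n) / n
    return np.asarray(f(t), dtype=complex)

# Example: sample a signal made of two low-frequency tones.
signal = lambda t: np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)
samples = point_sample(signal, n=16)   # an element of C^16
```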

The fact that the dimensions have to agree is related to the Nyquist–Shannon sampling theorem.

The elementary linear algebra approach works here. Let $d_k := (0, \ldots, 0, 1, 0, \ldots, 0)$ (all entries zero, except for the $k$th entry, which is a one), or some other basis of $\mathbb{C}^n$. To define an inverse for $F$, simply choose, for each $k$, an $e_k \in L^2$ so that $F(e_k) = d_k$. This uniquely defines the (pseudo-)inverse of $F$.
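
As one hedged illustration of this construction (assuming the point-sampling map sketched above), one can take $e_k$ to be the indicator function of $[k/n, (k+1)/n)$, which equals one at the $k$th sample instant and zero at all the others, so $F(e_k) = d_k$. This is only one of many valid choices of $e_k$; it yields a piecewise-constant (zero-order-hold) reconstruction.

```python
import numpy as np

def reconstruct_zero_order_hold(samples):
    """Reconstruction R built by choosing, for each k, an e_k with F(e_k) = d_k.

    Here e_k is taken to be the indicator function of [k/n, (k+1)/n),
    which is 1 at the k-th sample instant t = k/n and 0 at every other
    sample instant, so F(e_k) = d_k under point sampling.  The map
    c |-> sum_k c_k e_k is then a piecewise-constant (zero-order-hold)
    reconstruction.
    """
    n = len(samples)

    def r(t):
        t = np.asarray(t, dtype=float)
        k = np.floor((t % 1.0) * n).astype(int)   # index of the interval containing t
        return samples[k]

    return r

# Usage: evaluate the reconstructed signal between the sample instants.
# r = reconstruct_zero_order_hold(samples)
# r(np.linspace(0, 1, 200, endpoint=False))
```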

Of course, one can choose some reconstruction formula first, then either compute some sampling algorithm from the reconstruction formula, or analyze the behavior of a given sampling algorithm with respect to the given formula.

Ideally, the reconstruction formula is derived by minimizing the expected error variance. This requires that either the signal statistics are known or a prior probability distribution for the signal can be specified. Information field theory is then an appropriate mathematical formalism to derive an optimal reconstruction formula.[1]
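
In finite dimensions this idea reduces to the classical Wiener filter. The sketch below is only a discretized stand-in for information field theory, under assumptions made here for illustration: the unknown signal lives on a fine grid with known prior covariance S, the data are d = F s + noise with noise covariance N, and both signal and noise are zero-mean Gaussian.

```python
import numpy as np

def wiener_reconstruction(F, S, N, d):
    """Minimum-variance reconstruction on a finite grid (Wiener filter).

    Model assumed here: d = F @ s + noise, with the signal s zero-mean
    Gaussian with covariance S and the noise zero-mean Gaussian with
    covariance N.  The posterior mean

        s_hat = S F^H (F S F^H + N)^{-1} d

    minimizes the expected squared reconstruction error under these
    assumptions.  Information field theory extends this to continuous
    fields; this function is only a finite-dimensional sketch.
    """
    FSFh = F @ S @ F.conj().T
    return S @ F.conj().T @ np.linalg.solve(FSFh + N, d)
```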

Popular reconstruction formulae

Perhaps the most widely used reconstruction formula is as follows. Let $\{e_k\}$ be a basis of $L^2$ in the Hilbert space sense; for instance, one could use the eikonal

$$e_k(t) := e^{2\pi i k t},$$

although other choices are certainly possible. Note that here the index $k$ can be any integer, even negative.

Then we can define a linear map $R$ by

$$R(d_k) := e_k$$

for each $k = \lfloor -n/2 \rfloor, \ldots, \lfloor n/2 \rfloor - 1$, where $\{d_k\}$ is the basis of $\mathbb{C}^n$ given by

$$d_k(j) := e^{2\pi i j k / n}.$$

(This is the usual discrete Fourier basis.)

The choice of range $k = \lfloor -n/2 \rfloor, \ldots, \lfloor n/2 \rfloor - 1$ is somewhat arbitrary, although it satisfies the dimensionality requirement and reflects the usual notion that the most important information is contained in the low frequencies. In some cases this is incorrect, so a different reconstruction formula needs to be chosen.
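
A short numerical sketch of this formula, again assuming the point-sampling setup from above (function names are illustrative): expanding the sample vector in the discrete Fourier basis and sending each $d_k$ to the complex exponential $e_k(t) = e^{2\pi i k t}$, with $k$ running over the $n$ frequencies closest to zero, gives ordinary trigonometric interpolation.

```python
import numpy as np

def fourier_reconstruct(samples):
    """Reconstruction via the low-frequency Fourier basis (trigonometric interpolation).

    Expands the sample vector in the discrete Fourier basis
    d_k(j) = exp(2*pi*i*j*k/n) and sends each d_k to the continuous
    exponential e_k(t) = exp(2*pi*i*k*t), with k ranging over the n
    frequencies closest to zero (k = -n/2, ..., n/2 - 1 for even n).
    The returned function agrees with the samples at t = j/n.
    """
    n = len(samples)
    coeffs = np.fft.fft(samples) / n            # coordinates in the discrete Fourier basis
    freqs = np.fft.fftfreq(n, d=1.0 / n)        # integer frequencies -n/2, ..., n/2 - 1

    def r(t):
        t = np.asarray(t, dtype=float)
        # Sum coeffs[k] * exp(2*pi*i*k*t) over the chosen low-frequency range.
        return np.tensordot(coeffs, np.exp(2j * np.pi * np.outer(freqs, t)), axes=(0, 0))

    return r

# Usage: r = fourier_reconstruct(samples); r(0.5) evaluates between sample instants.
```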

A similar approach can be obtained by using wavelets instead of Hilbert bases. For many applications, the best approach is still not clear today.


References

  1. "Information field theory". Max Planck Society. Retrieved 13 Nov 2014.