Wavefront coding

In optics and signal processing, wavefront coding refers to the use of a phase-modulating element in conjunction with deconvolution to extend the depth of field of a digital imaging system such as a video camera.

Wavefront coding falls under the broad category of computational photography as a technique to enhance the depth of field.

Encoding

The wavefront of a light wave passing through the camera system is modulated using optical elements that introduce a spatially varying optical path length. The modulating elements must be placed at or near the plane of the aperture stop or pupil so that the same modulation is introduced for all field angles across the field of view. This modulation corresponds to a change in the complex argument of the pupil function of the imaging system, and it can be engineered with different goals in mind, e.g. extending the depth of focus.
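As a rough illustration of the encoding step, the following sketch builds a complex pupil function from an aperture and a phase mask and computes the resulting incoherent point spread function, under a standard scalar Fourier-optics model. The grid size, normalisation and placeholder phase profile are illustrative assumptions, not taken from any particular implementation.

import numpy as np

# Sketch of pupil-plane phase modulation (assumed scalar Fourier-optics model).
# The pupil function is P(u, v) = A(u, v) * exp(j * theta(u, v)), where A is
# the aperture transmittance and theta is the phase delay added by the mask.
N = 256                                    # sample grid (illustrative)
u = np.linspace(-1.0, 1.0, N)
U, V = np.meshgrid(u, u)
aperture = (U**2 + V**2) <= 1.0            # circular aperture stop

theta = np.zeros_like(U)                   # phase delay of the mask, in radians
# e.g. theta = psi * (U**2 + V**2) would model pure defocus

pupil = aperture * np.exp(1j * theta)      # complex pupil function
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(field)**2                     # incoherent point spread function
psf /= psf.sum()                           # normalise to unit energy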

Linear phase mask

Wavefront coding with linear phase masks works by creating an optical transfer function that encodes distance information.[1]

Cubic phase mask

Wavefront coding with cubic phase masks blurs the image uniformly using a cubic-profile phase plate, so that the intermediate image is out of focus by an essentially constant amount over a wide range of object distances; equivalently, the optical transfer function becomes nearly insensitive to defocus. Digital image processing then removes the blur, introducing noise to a degree that depends on the physical characteristics of the processing. Depending on the type of filter used, dynamic range is sacrificed to extend the depth of field. The technique can also correct optical aberrations.[2]
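The sketch below illustrates this idea under the same assumed Fourier-optics model with made-up parameter values: a cubic phase term alpha*(u^3 + v^3) is added to the pupil, the coded point spread function is evaluated at two defocus values, and a single Wiener-style deconvolution filter is built of the kind that could then be applied to every captured image regardless of object distance.

import numpy as np

# Cubic phase mask sketch: theta(u, v) = alpha * (u**3 + v**3).
# With a strong cubic term, the coded PSF varies far less with defocus psi
# than it would without the mask, so one precomputed filter can restore
# images of objects at many distances.
N, alpha = 256, 90.0
x = np.linspace(-1.0, 1.0, N)
U, V = np.meshgrid(x, x)
aperture = (np.abs(U) <= 0.5) & (np.abs(V) <= 0.5)   # square pupil, padded grid

def coded_psf(psi):
    """Incoherent PSF for defocus psi with the cubic mask in the pupil."""
    phase = alpha * (U**3 + V**3) + psi * (U**2 + V**2)
    pupil = aperture * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil))))**2
    return p / p.sum()

# The modulation transfer function |OTF| changes little between these cases:
otf_focused   = np.fft.fft2(np.fft.ifftshift(coded_psf(0.0)))
otf_defocused = np.fft.fft2(np.fft.ifftshift(coded_psf(10.0)))

# Single Wiener-style filter built from the in-focus OTF; nsr is an assumed
# noise-to-signal ratio (this is where noise amplification is traded off).
nsr = 1e-3
wiener = np.conj(otf_focused) / (np.abs(otf_focused)**2 + nsr)

def restore(coded_image):
    """Deconvolve a coded image with the single precomputed filter."""
    return np.real(np.fft.ifft2(np.fft.fft2(coded_image) * wiener))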

The cubic mask shape was derived using the ambiguity function and the stationary-phase method.
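A one-dimensional sketch of that argument runs as follows (the normalisation of the pupil coordinate, and hence the exact constants, vary between references). With a cubic mask of strength $\alpha$ and defocus parameter $\psi$, the generalised pupil function is

$P(x;\psi) = \exp\{\, j(\alpha x^{3} + \psi x^{2})\,\}$

and the OTF at spatial frequency $u$ is its autocorrelation,

$H(u;\psi) = \int P\!\left(x+\tfrac{u}{2};\psi\right)P^{*}\!\left(x-\tfrac{u}{2};\psi\right)dx = \int \exp\!\left\{ j\!\left(3\alpha u x^{2} + 2\psi u x + \tfrac{\alpha u^{3}}{4}\right)\right\} dx .$

The phase of the integrand is stationary at $x^{*} = -\psi/(3\alpha)$, and the stationary-phase approximation gives, for large $\alpha$ and $u \neq 0$,

$H(u;\psi) \approx \sqrt{\frac{\pi}{3\alpha|u|}}\; \exp\!\left\{ j\!\left(\frac{\alpha u^{3}}{4} - \frac{\psi^{2} u}{3\alpha}\right)\right\}$

up to a constant phase factor. The magnitude of the OTF is thus approximately independent of the defocus $\psi$: the blur is essentially the same at every object distance and can be removed with a single deconvolution filter.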

History

The technique was pioneered by radar engineer Edward Dowski and his thesis adviser Thomas Cathey at the University of Colorado in the United States in the 1990s. After the university showed little interest in the research,[3] Dowski and Cathey founded a company, CDM Optics, to commercialize the method. The company was acquired in 2005 by OmniVision Technologies, which has released wavefront-coding-based mobile camera chips as TrueFocus sensors.

TrueFocus sensors are able to simulate older autofocus technologies that rely on rangefinders and a narrow depth of field.[4] In principle, the technology allows any number of combinations of focal points per pixel for effect. It is the only such technology not limited to extended depth of field (EDoF).

References
