Camera auto-calibration

Camera auto-calibration is the process of determining internal camera parameters directly from multiple uncalibrated images of unstructured scenes. In contrast to classic camera calibration, auto-calibration does not require any special calibration objects in the scene. In the visual effects industry, camera auto-calibration is often part of the "match moving" process, in which a synthetic camera trajectory and intrinsic projection model are solved so that synthetic content can be reprojected into the video.

Camera auto-calibration is a form of sensor ego-structure discovery: the subjective effects of the sensor are separated from the objective effects of the environment, leading to a reconstruction of the perceived world without the bias applied by the measurement device. This is achieved via the fundamental assumption that images are projected from a Euclidean space through a linear pinhole camera model with, in the simplest case, five degrees of freedom, possibly combined with non-linear optical distortion. The linear pinhole parameters are the focal length, the aspect ratio, the skew, and the 2D principal point. With only a set of uncalibrated (or calibrated) images, a scene may be reconstructed up to a six-degree-of-freedom Euclidean transform and an isotropic scaling.
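For concreteness, the five linear pinhole parameters are conventionally collected into an upper-triangular intrinsic matrix; the notation below is the standard convention and is assumed here rather than taken from this article:

    K = \begin{pmatrix} f & s & c_x \\ 0 & \alpha f & c_y \\ 0 & 0 & 1 \end{pmatrix},
    \qquad x \simeq K \, [R \mid t] \, X,

with focal length f, aspect ratio \alpha, skew s and principal point (c_x, c_y). The global Euclidean transform and isotropic scale mentioned above correspond to the remaining freedom in choosing the world frame for the poses [R | t] and the points X.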

A mathematical theory for general multi-view camera self-calibration was first demonstrated in 1992 by Olivier Faugeras, Q.T. Luong, and Stephen J. Maybank. For 3D scenes and general motions, each pair of views provides two constraints on the five-degree-of-freedom calibration. Three views are therefore the minimum needed for full calibration when the intrinsic parameters are fixed between views. Quality modern imaging sensors and optics may also provide further prior constraints on the calibration, such as zero skew (an orthogonal pixel grid) and unit aspect ratio (square pixels). Integrating these priors reduces the minimum number of images needed to two. It is also possible to auto-calibrate a sensor from a single image given supporting information in a structured scene; for example, calibration may be obtained if multiple sets of parallel lines or objects of known shape (e.g. circles) are identified.
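A rough constraint count, under the article's assumption of intrinsics held fixed across views (a sketch, not a statement from the original text):

    m \text{ views} \;\Rightarrow\; \binom{m}{2} \text{ pairs} \;\Rightarrow\; m(m-1) \text{ constraints},
    \qquad m(m-1) \ge 5 \;\Rightarrow\; m \ge 3.

If priors remove most of the unknowns, for instance zero skew, unit aspect ratio and an assumed principal point leaving only the focal length, then the two constraints from a single view pair already suffice, which is why two images can be enough.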

Problem statement

Given a set of cameras P_i and 3D points X_j reconstructed up to a projective ambiguity (using, for example, projective bundle adjustment), we wish to find a rectifying homography H such that {P_i H, H^{-1} X_j} is a metric reconstruction. The internal camera parameters can then be computed from the camera matrix factorization P_i H ≃ K_i [R_i | t_i].
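A minimal sketch of the final factorization step under the assumptions above (NumPy/SciPy; the function and variable names are chosen here for illustration and are not part of the article): given a projective camera P and a candidate rectifying homography H, the intrinsics follow from an RQ factorization of the left 3×3 block of P H.

    # Illustrative sketch only; assumes P is 3x4 and H is a 4x4 rectifying homography.
    import numpy as np
    from scipy.linalg import rq

    def intrinsics_from_rectified_camera(P, H):
        """Return K, R, t with P @ H proportional to K @ [R | t]."""
        PH = P @ H
        M = PH[:, :3]
        K, R = rq(M)                          # M = K R, K upper triangular, R orthogonal
        S = np.diag(np.sign(np.diag(K)))      # flip signs so K has a positive diagonal
        K, R = K @ S, S @ R                   # S @ S = I, so the product K @ R is unchanged
        t = np.linalg.solve(K, PH[:, 3])      # K @ [R | t] equals PH exactly at this point
        if np.linalg.det(R) < 0:              # enforce a proper rotation; a camera matrix
            R, t = -R, -t                     # is only defined up to scale, so this is allowed
        return K / K[2, 2], R, t              # normalize so K[2, 2] = 1

When the same H is applied to every camera P_i, the recovered K_i should agree whenever the intrinsics really are constant across views, which makes a useful sanity check on the estimated rectification.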

Solution domains

  • Motions
    • General Motion
    • Purely Rotating Cameras
    • Planar Motion
    • Degenerate Motions
  • Scene Geometry
    • General Scenes with Depth Relief
    • Planar Scenes
    • Weak Perspective and Orthographic Imagers
    • Calibration Priors for Real Sensors
    • Nonlinear optical distortion

Algorithms of camera auto-calibration

  • Using the Kruppa equations. Historically the first auto-calibration approach, it is based on the correspondence of epipolar lines tangent to the image of the absolute conic, the conic lying on the plane at infinity.
  • Using the absolute dual quadric and its projection, the dual image of the absolute conic (the basic relation is sketched after this list).
  • The modulus constraint
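The second approach in the list rests on a standard relation (notation assumed here, following common usage): the dual image of the absolute conic in view i is the projection of the absolute dual quadric,

    \omega^*_i \;\simeq\; P_i \, Q^*_\infty \, P_i^\top, \qquad \omega^*_i = K_i K_i^\top,

so that once Q^*_\infty has been estimated, for example linearly from prior constraints on \omega^*_i such as zero skew, each K_i follows from a Cholesky factorization of \omega^*_i.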

References

  • O.D. Faugeras; Q.T. Luong; S.J. Maybank (1992). "Camera Self-Calibration: Theory and Experiments". ECCV. Lecture Notes in Computer Science. 588: 321–334. doi:10.1007/3-540-55426-2_37. ISBN 978-3-540-55426-4.
  • Q.T. Luong (1992). Matrice fondamentale et auto-calibration en vision par ordinateur. PhD Thesis, University of Paris, Orsay.
  • Q.T. Luong and Olivier D. Faugeras (1997). "Self-calibration of a moving camera from point correspondences and fundamental matrices". International Journal of Computer Vision. 22 (3): 261–289. doi:10.1023/A:1007982716991.
  • Olivier Faugeras and Q.T. Luong (2001). The Geometry of Multiple Images. MIT Press. ISBN 0-262-06220-8.
  • Richard Hartley; Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press. ISBN 0-521-54051-8.