Image-based modeling and rendering

In computer graphics and computer vision, image-based modeling and rendering (IBMR) methods rely on a set of two-dimensional images of a scene to generate a three-dimensional model and then render novel views of the scene.

The traditional approach of computer graphics has been to create a geometric model in 3D and try to reproject it onto a two-dimensional image. Computer vision, conversely, is mostly focused on detecting, grouping, and extracting features (edges, faces, etc.) present in a given picture and then trying to interpret them as three-dimensional clues. Image-based modeling and rendering allows the use of multiple two-dimensional images to directly generate novel two-dimensional images, skipping the manual modeling stage.

Light modeling

Instead of considering only the physical model of a solid, IBMR methods usually focus more on light modeling. The fundamental concept behind IBMR is the plenoptic illumination function, which is a parametrisation of the light field. The plenoptic function describes the light rays contained in a given volume. It can be represented with seven dimensions: a ray is defined by its position (x, y, z), its orientation (θ, φ), its wavelength λ and its time t: P(x, y, z, θ, φ, λ, t). IBMR methods try to approximate the plenoptic function to render a novel set of two-dimensional images from another set. Given the high dimensionality of this function, practical methods place constraints on its parameters in order to reduce this number (typically to 2 to 4).
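One common dimensionality reduction is the two-plane ("light slab") parametrisation: fixing time and wavelength, each ray is described by its intersections (u, v) and (s, t) with two parallel planes, giving a 4D light field L(u, v, s, t). The sketch below illustrates this idea with a synthetic light field and a nearest-neighbour ray lookup; the resolutions and the `sample_ray` helper are illustrative assumptions, not any particular system's API.

```python
import numpy as np

# Hypothetical angular (u, v) and spatial (s, t) resolution of the light slab.
U, V, S, T = 8, 8, 16, 16

# Synthetic radiance samples standing in for values captured from real images.
rng = np.random.default_rng(0)
light_field = rng.random((U, V, S, T))

def sample_ray(u, v, s, t):
    """Nearest-neighbour lookup of the radiance along one ray.

    (u, v) and (s, t) are continuous plane coordinates in [0, 1);
    a real renderer would interpolate (e.g. quadrilinearly) instead.
    """
    iu = min(int(u * U), U - 1)
    iv = min(int(v * V), V - 1)
    i_s = min(int(s * S), S - 1)
    i_t = min(int(t * T), T - 1)
    return light_field[iu, iv, i_s, i_t]

# Rendering a novel view then amounts to evaluating sample_ray once per pixel.
radiance = sample_ray(0.5, 0.5, 0.25, 0.75)
```

Because the scene's radiance is baked into the 4D table, novel views can be synthesised by pure lookup, with no geometric model of the scene required.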

IBMR methods and algorithms

  • View morphing generates a transition between images
  • Panoramic imaging renders panoramas using image mosaics of individual still images
  • Lumigraph relies on a dense sampling of a scene
  • Space carving generates a 3D model based on a photo-consistency check
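As an illustration of the last item, the core of space carving is a photo-consistency test: a voxel survives only if the colours it projects to in all input images agree. The toy sketch below assumes the per-view colour samples for each voxel have already been gathered (a real system would obtain them by projecting voxels through calibrated camera matrices); the function names and threshold are illustrative.

```python
import numpy as np

def photo_consistent(colors, threshold=0.1):
    """Photo-consistency check for one voxel.

    colors: observed RGB colours of the voxel in each view, shape (n_views, 3).
    The voxel is consistent if the per-channel standard deviation across
    views is below the (illustrative) threshold.
    """
    colors = np.asarray(colors, dtype=float)
    return bool(np.all(colors.std(axis=0) < threshold))

def carve(voxel_colors, threshold=0.1):
    """Keep only voxels whose projected colours agree across views.

    voxel_colors: dict mapping voxel index -> (n_views, 3) colour samples.
    Returns the set of surviving voxel indices.
    """
    return {v for v, c in voxel_colors.items() if photo_consistent(c, threshold)}

# Two voxels: one seen with nearly the same red in both views (a surface
# point), one seen with disagreeing colours (free space, carved away).
observations = {
    (0, 0, 0): [(1.0, 0.0, 0.0), (0.98, 0.02, 0.0)],
    (1, 0, 0): [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)],
}
surviving = carve(observations)  # only (0, 0, 0) survives
```

Iterating this test over a voxel grid, removing inconsistent voxels until none remain, yields the photo-consistent 3D model.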
