Graphics Rendering
Graphics rendering is the process of composing the images that a player sees on the screen while playing a game. Quite a few tasks are needed to produce anything from a simple graphical game like Super Mario Bros. to a more modern game like Gears of War or Modern Warfare. Since 2D and 3D drawing methods are entirely different, they'll be covered in separate sections.
2D Graphics rendering
2D graphics can be divided into two different methods: raster graphics and vector graphics.
Raster Graphics
Raster graphics is the most common method. It generally involves drawing two kinds of things: sprites and layers.
A layer is, conceptually, a single, whole image. Each layer is rendered to the screen in a specific order, so layers rendered later appear in front of earlier ones. On NES-style 2D rendering hardware (essentially any exclusively 2D hardware built after the NES, except for computers, which weren't subject to this limitation), layers were built out of a small set of images called "tiles". The layer's image was composed as a mosaic of these tiles; this technique is called "tilemapping", and it is what led to the tilemaps associated with NES-style 2D rendering. In modern 2D games, layers can be, and often are, fully hand-drawn images.
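Here is a minimal sketch, in Python, of how a tilemapped layer might be composed. The 8x8 tile size, the map layout, and all names are hypothetical rather than any particular console's format.

    # Compose a layer image from a tile set and a tile map (hypothetical sizes).
    TILE = 8  # tiles are 8x8 pixels

    # Two example tiles, each an 8x8 grid of palette indices.
    tiles = [
        [[0] * TILE for _ in range(TILE)],   # tile 0: empty
        [[1] * TILE for _ in range(TILE)],   # tile 1: solid
    ]

    # The tile map says which tile goes in each cell of the layer.
    tile_map = [
        [0, 1, 0, 1],
        [1, 0, 1, 0],
    ]

    def compose_layer(tile_map, tiles):
        """Expand a tile map into a full layer image, pixel by pixel."""
        h, w = len(tile_map) * TILE, len(tile_map[0]) * TILE
        layer = [[0] * w for _ in range(h)]
        for ty, row in enumerate(tile_map):
            for tx, index in enumerate(row):
                tile = tiles[index]
                for y in range(TILE):
                    for x in range(TILE):
                        layer[ty * TILE + y][tx * TILE + x] = tile[y][x]
        return layer

    layer = compose_layer(tile_map, tiles)  # 16x32 pixels from a 2x4 map

The point of the technique is visible in the memory use: the layer is described by a handful of tiles plus a small grid of indices, instead of one large unique image.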
Depending on the available hardware, special blending features can be used when rendering a layer. This can allow a layer to be translucent or apply some other visual effect to the layers beneath it. Usually, the UI or HUD in a 2D game is the last layer to be rendered, so it appears on top of everything.
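As a sketch of one common blending feature, here is how simple alpha translucency might combine a layer's pixel with the pixel already beneath it. This is the standard "over" blend; the sample values are illustrative.

    def blend_over(src_rgb, dst_rgb, alpha):
        """Blend a translucent layer pixel (src) over the pixel beneath it (dst).

        alpha is 0.0 (fully transparent) to 1.0 (fully opaque).
        """
        return tuple(alpha * s + (1.0 - alpha) * d
                     for s, d in zip(src_rgb, dst_rgb))

    # A half-transparent red layer over a blue background gives purple.
    print(blend_over((255, 0, 0), (0, 0, 255), 0.5))  # (127.5, 0.0, 127.5)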
Layers tend to be relatively fixed in position. Sprites, by contrast, are smaller images that can move and animate. Sprites are generally rendered between layers. On NES-style hardware, all sprites tended to be rendered between certain fixed layers, but this is hardly necessary today. Most things you think of as characters or enemies, generally anything that physically moves its position, are sprites.
Due to the way NES-style hardware handles sprite rendering, sprites on such hardware have both a fixed size and a maximum limit on how many can appear on a single horizontal line at once. Going past this limit leads to flickering, and this flickering was actually intentional on the part of the hardware designers: by rotating which sprites get dropped each frame, every sprite is visible at least part of the time. Without it, one specific sprite would simply always be invisible.
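Here is a minimal sketch of that rotation trick, assuming a hypothetical eight-sprite-per-scanline limit. By changing which sprites win priority each frame, every sprite gets drawn at least part of the time instead of the same one always being dropped.

    SCANLINE_LIMIT = 8  # hypothetical limit on sprites per horizontal line

    def sprites_to_draw(sprites_on_line, frame):
        """Rotate the starting sprite each frame so the dropped ones cycle.

        sprites_on_line: sprite IDs sharing one scanline, in priority order.
        Returns the IDs actually drawn this frame; the rest flicker off.
        """
        n = len(sprites_on_line)
        if n <= SCANLINE_LIMIT:
            return list(sprites_on_line)
        start = frame % n  # a different sprite gets skipped each frame
        return [sprites_on_line[(start + i) % n] for i in range(SCANLINE_LIMIT)]

    # With 10 sprites on one line, each frame draws a different 8 of them.
    line = list(range(10))
    for frame in range(3):
        print(frame, sprites_to_draw(line, frame))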
Vector Graphics
Vector graphics, on the other hand, takes a more "mathematical" approach: everything is rendered on the fly using points and the lines that connect them. Among the first games to use vector graphics were Atari's Asteroids and Battlezone, and the Vectrex was a console built entirely around vector graphics. Early vector graphics were simply "wireframes" of the model or image being rendered, hence the lack of color other than the outline. Eventually this style faded out of popularity until the advent of Macromedia (now Adobe) Flash, which allowed the spaces between vector lines to be filled with color (although professional graphics tools could also do this).
The advantage of vector graphics is its infinite scalability. Because everything is drawn on the fly, a vector image rendered at a low resolution looks just as sharp as one rendered in high definition, whereas scaling up a low-resolution raster image produces an interpolated (blurred) or pixelated result. Vector graphics also allow an artist to draw rather freely, with the graphics software recording each stroke as a vector. The downside is that vector graphics are computationally expensive, as is evident in high-quality Flash movies and games.
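A sketch of why vector art scales cleanly: the shape is stored as points, and rendering it at any size just multiplies those points out, so no pixels are ever stretched. The triangle and scale factors here are arbitrary examples.

    # A shape stored as vector data: just the three points of a triangle.
    triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

    def scale_shape(points, factor):
        """Produce the same shape at any size; no detail is lost or blurred."""
        return [(x * factor, y * factor) for x, y in points]

    print(scale_shape(triangle, 100))   # fits a 100-pixel canvas
    print(scale_shape(triangle, 4000))  # fits a 4000-pixel canvas, equally sharp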
3D Graphics rendering
Much like 2D graphics rendering, 3D rendering has two main methods.
Voxel 3D Graphics
"Voxel" is a portmanteau of volumetric and pixel (or more precisely, volumetric picture element). In a 3D space, the voxel is the most basic element, akin to a pixel being the smallest element of a picture. In fact, a voxel model works much the way raster graphics do, extended into a third dimension. It's an old concept but is still relatively unused due to hardware constraints (see the disadvantages below).
There are a few reasons for the push toward voxels.
- Voxels can represent a 3D object in a similar way that a picture represents a 2D one. Imagine what you can do to a 2D picture, and apply it with another dimension.
- Since voxels fill up a space to represent an object, you can break objects apart without creating new geometry, as a polygon-based renderer would have to; you simply break off a chunk of the object.
- Voxels can have their own color, eliminating the need for textures entirely.
However, there are still a few things to overcome:
- Voxels require a lot more memory than a 2D image (or even a polygonal 3D model). A 16x16 image with 1 byte per pixel, for instance, requires 256 bytes to store; a 16x16x16 model with 1 byte per voxel requires 4,096 bytes. A way around this is to find groups of like voxels and clump them together into one big voxel, as in the storage sketch after this list.
- Detailed voxel models are computationally expensive to set up. Hence they are mostly limited to major industries that need the detail.
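A minimal sketch of that memory trade-off: a dense 16x16x16 array stores every voxel, while a sparse map stores only the occupied ones, which is one simple way of "clumping" empty space together. The names and the solid-floor example are made up for illustration.

    SIZE = 16

    # Dense storage: one entry per voxel -> 16*16*16 = 4096 entries, always.
    dense = [[[0] * SIZE for _ in range(SIZE)] for _ in range(SIZE)]

    # Sparse storage: only record voxels that are actually filled.
    sparse = {}
    for x in range(SIZE):
        for y in range(SIZE):
            dense[x][y][0] = 1          # a one-voxel-thick floor at z = 0
            sparse[(x, y, 0)] = 1

    print(SIZE ** 3)      # 4096 dense entries regardless of content
    print(len(sparse))    # 256 entries: only the floor voxels are stored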
Voxlap and Atomontage are currently the only known voxel-based graphics engines with potential for games. Other games have used voxel-based technology for certain elements.
Polygonal 3D Graphics
Much like 2D vector graphics, polygonal 3D graphics takes an approximated, mathematical approach to representing an object. The polygons themselves have all the benefits of vector graphics; the other elements, such as textures, are typically subject to the same constraints as raster graphics.
Computer-generated 3D scenes are usually built from the following primitive elements.
- Vertex: A 0-dimensional, essentially invisible point in space that represents one corner of a polygon. This is the smallest unit of a 3D scene.
- Polygon: A 2D plane that occupies the space between 3 or 4 vertices. Usually the plane is a triangle, since any other shape can be built out of more triangles. Quadrilaterals were also used in early computer graphics, since they were computationally cheaper on that hardware.
- Texture: Polygons normally have a single flat color. A texture is an image placed on a polygon to give it surface detail.
- Sprite: A 2D element in a 3D space. It's typically a flat, textured polygon that always faces the player no matter where the player is looking.
- Particle: Essentially a small 2D sprite, used in large numbers to form something more complex, like an explosion or smoke.
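Putting the first primitives together, a 3D model is typically stored as a list of vertices plus a list of triangles that index into it, roughly like the following. This is a hypothetical layout, not any particular engine's format.

    # A unit quad built from the primitives above: 4 vertices, 2 triangles.
    vertices = [
        (0.0, 0.0, 0.0),   # vertex 0
        (1.0, 0.0, 0.0),   # vertex 1
        (1.0, 1.0, 0.0),   # vertex 2
        (0.0, 1.0, 0.0),   # vertex 3
    ]

    # Each polygon is three indices into the vertex list; sharing vertices
    # this way is why the vertex is the smallest unit of the scene.
    triangles = [
        (0, 1, 2),
        (0, 2, 3),
    ]

    # Texture coordinates map each vertex to a point on a texture image.
    uvs = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]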
At minimum, a 3D scene can be created with just vertices and polygons, but this doesn't create a very realistic look (think of the first Star Fox game). The modern GPU follows the pipeline below, based on how DirectX 11 graphics cards render a scene.
- Transform: All vertices and polygons are positioned in 3D space. This was primarily done by the CPU until 1999, when consumer graphics cards gained hardware transform support. (A miniature sketch of this stage and the rasterization stage appears after this list.)
- Geometry instancing: A particular 3D model that's used often is loaded onto the graphics card only once; entities using that model then point to it rather than loading it again, while each instance can still act independently. This saves memory and rendering time.
- Primitive generation/tessellation: Polygons can be added or subtracted here as necessary to create just enough detail. In the case of tessellation with a displacement map, this can create great detail without too much of a performance loss; the step mostly helps improve the silhouette of the object. Since the extra polygons are created on the fly, this saves memory, but not necessarily rendering time.
- Clipping: Now that the world is set up, it would be very inefficient to do lighting calculations on everything. Clipping determines what the player can see at the moment and removes everything that cannot be seen. For example, if the player is in a house, the 3D engine will not render rooms that are not immediately visible. This is also why, in certain cases, frame rates shoot up if you look straight into the sky, the ground, or a wall.
- Texture and lighting: This step is done in one of three ways.
- Single pass: Each object is rendered with all lighting effects applied to it. Simple, but complexity increases with the number of light sources.
- Multipass shading: For each light, render the objects affected by that light. Better, but complexity increases with the number of light sources multiplied by the number of objects.
- Deferred shading: Lighting effects are split up into small jobs, applied to buffers, and combined later. The increase in complexity is almost moot: it takes a deferred shader nearly as long to render a single polygon with one light as it does a million polygons with dozens of lights.
- Rasterization: Up until now, everything has been in 3D. This is the process that takes everything in that 3D space and converts it into a 2D image. Until we invent actual 3D displays, this is a necessary step.
- Post Processing: Special coloring effects are applied to the 2D output.
- Final Output: The image is buffered to be sent to the monitor.
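As a very rough sketch of the transform and rasterization stages above, the following projects a 3D camera-space point into 2D screen coordinates with a simple perspective divide. Real pipelines use 4x4 matrices, depth buffering, and proper clipping; the focal parameter standing in for field of view is a simplification.

    def project(point, screen_w, screen_h, focal=1.0):
        """Transform + rasterize in miniature: 3D camera-space point -> 2D pixel.

        Assumes the camera sits at the origin looking down +z; 'focal' stands
        in for the field of view. Points behind the camera are rejected.
        """
        x, y, z = point
        if z <= 0:
            return None  # behind the camera; clipping normally handles this
        # Perspective divide: things farther away (larger z) shrink to center.
        sx = (x * focal / z) * (screen_w / 2) + screen_w / 2
        sy = (-y * focal / z) * (screen_h / 2) + screen_h / 2
        return (sx, sy)

    # The same offset lands closer to screen center as the point moves away.
    print(project((1.0, 1.0, 2.0), 640, 480))   # (480.0, 120.0)
    print(project((1.0, 1.0, 10.0), 640, 480))  # (352.0, 216.0)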
Texturing has a few different methods, and today most of them are used together:
- Diffuse map (or albedo): Basically just a picture providing what the object looks like. For example, a brick wall picture could be placed on a polygon to resemble one.
- Specular map: Allows for simple shiny effects.
- Cube map: A texture with an approximation of the world's surroundings. This is used in simple reflections.
- Bump map: A texture that affects how lighting is done on the surface of a polygon. It can make a flat surface look like it has features.
- Normal map: A type of bump map that stores the "direction" each pixel is facing. This can make a seemingly flat object look like it was built from many polygons.
- Parallax map: A type of bump map that stores each pixel's "depth". Most commonly used with bricks, where the bricks look like they can obscure the cement holding them together even though the entire wall is actually flat.
- Height/Displacement Map: Usually used for large landscapes, this texture describes how much a polygon or vertex should stick out.
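A sketch of how a bump/normal map affects lighting: diffuse brightness is just the dot product between the light direction and the surface normal, so a per-pixel normal fetched from a texture makes a flat polygon shade as if it had bumps. The normals and light direction here are hand-picked examples.

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def diffuse(normal, light_dir):
        """Lambert diffuse term: brighter when the normal faces the light."""
        n, l = normalize(normal), normalize(light_dir)
        return max(0.0, sum(a * b for a, b in zip(n, l)))

    light = (0.0, 0.0, 1.0)            # light shining straight at the surface
    flat_normal = (0.0, 0.0, 1.0)      # what a flat polygon reports everywhere
    bumped_normal = (0.5, 0.0, 0.866)  # what a normal map reports at one pixel

    print(diffuse(flat_normal, light))    # 1.0: fully lit
    print(diffuse(bumped_normal, light))  # ~0.87: shaded as if tilted away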
Lighting also has its methods, though usually only one of these is used.
- Flat shading: Lighting is calculated at the polygon level, giving a faceted look when used with few polygons.
- Gouraud (per-vertex) lighting: Lighting is calculated at each vertex, and each pixel is shaded by interpolating the light its surrounding vertices received. It gives a better effect than flat shading, but it occasionally produces poor results, such as a small highlight falling between vertices being missed entirely.
- Phong (per-pixel) lighting: Lighting is calculated on each individual pixel, giving the most accurate results of the three.
- High dynamic range lighting: Here, dynamic range is the ratio between the darkest point and the brightest point. Originally this was at most 1:256, with 1 being black and 256 being white. Since the laws of physics say that light loses some of its power every time it is reflected, refracted, or passed through a transparent object, the sun, for example, could end up looking as dark as a dim flashlight under the right conditions. With high dynamic range lighting, this contrast ratio is increased significantly for the internal calculations, and the final output is then compressed back down to a lower contrast (since display technology cannot actually reproduce the full range). Using this type of lighting, the sun remains realistically bright even after losing power along the way.
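A sketch of that final compression step, using Reinhard's simple tone-mapping operator to squeeze an unbounded brightness range back into the 0-1 range a display can show. The sample values are arbitrary.

    def reinhard(luminance):
        """Map any brightness >= 0 into [0, 1) while keeping relative order."""
        return luminance / (1.0 + luminance)

    # HDR scene values: a dim flashlight vs. a sun thousands of times brighter.
    for lum in (0.5, 2.0, 5000.0):
        print(lum, "->", round(reinhard(lum), 4))
    # 0.5 -> 0.3333, 2.0 -> 0.6667, 5000.0 -> 0.9998
    # The sun maps near white instead of forcing everything else toward black.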