Rendering equation
In computer graphics, the rendering equation is an integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation. It was simultaneously introduced into computer graphics by David Immel et al. and James Kajiya in 1986. The various realistic rendering techniques in computer graphics attempt to solve this equation. The physical basis for the rendering equation is the law of conservation of energy.
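In its common steady-state form, dropping the wavelength and time dependence for brevity, the equation reads

    L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
      + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i

where L_o is the outgoing radiance at point x in direction ω_o, L_e is the emitted radiance, f_r is the bidirectional reflectance distribution function (BRDF), L_i is the incoming radiance from direction ω_i, n is the surface normal, and the integral is taken over the hemisphere Ω of incoming directions above x.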
Reflection mapping
In computer graphics, environment mapping, or reflection mapping, is an efficient technique for approximating the appearance of a reflective surface by means of a precomputed texture. The texture is used to store the image of the distant environment surrounding the rendered object. Several ways of storing the surrounding environment have been employed. The first technique was sphere mapping, in which a single texture contains the image of the surroundings as reflected on a spherical mirror.
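As an illustration of the sphere-mapping lookup, the following is a minimal Python sketch (not from the article) that computes sphere-map texture coordinates from an eye-space reflection vector, using the convention commonly associated with OpenGL's GL_SPHERE_MAP texture-coordinate generation.

    import math

    def reflect(incident, normal):
        # r = i - 2 (i . n) n, assuming a unit-length normal
        d = sum(i * n for i, n in zip(incident, normal))
        return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

    def sphere_map_uv(reflection):
        # Map an eye-space reflection vector to (u, v) on the image of a
        # mirrored sphere stored in the precomputed texture.
        rx, ry, rz = reflection
        m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
        return rx / m + 0.5, ry / m + 0.5

    # Example: a surface facing the camera head-on reflects straight back,
    # which lands in the centre of the sphere map.
    print(sphere_map_uv(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))))  # (0.5, 0.5)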
Caustic (optics)
In optics, a caustic or caustic network is the envelope of light rays which have been reflected or refracted by a curved surface or object, or the projection of that envelope of rays onto another surface. The caustic is a curve or surface to which each of the light rays is tangent, defining a boundary of an envelope of rays as a curve of concentrated light. In a photograph, caustics therefore appear as patches of light or as their bright edges. These shapes often have cusp singularities.
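To make the envelope idea concrete, here is a small Python sketch (an illustration, not part of the article) that generates the family of rays produced when parallel light reflects off the inside of a circular mirror; plotting these rays shows them crowding along their envelope, the familiar coffee-cup caustic.

    import math

    def reflected_ray(theta, direction=(1.0, 0.0)):
        # A parallel ray travelling in +x hits the inside of a unit circular
        # mirror at p = (cos t, sin t); reflect it about the surface normal.
        p = (math.cos(theta), math.sin(theta))
        n = p  # the circle's normal at p (its sign does not affect the reflection)
        dot = direction[0] * n[0] + direction[1] * n[1]
        r = (direction[0] - 2.0 * dot * n[0], direction[1] - 2.0 * dot * n[1])
        return p, r  # ray origin and reflected direction

    # Rays hitting the half of the circle facing the incoming light.
    rays = [reflected_ray(k * math.pi / 50.0) for k in range(-25, 26)]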
Shadow volume
Shadow volume is a technique used in 3D computer graphics to add shadows to a rendered scene. Shadow volumes were first proposed by Frank Crow in 1977 as the geometry describing the 3D shape of the region occluded from a light source. A shadow volume divides the virtual world into two parts: areas that are in shadow and areas that are not. The stencil buffer implementation of shadow volumes is generally considered among the most practical general-purpose real-time shadowing techniques for use on modern 3D graphics hardware.
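The stencil-buffer passes themselves are a GPU technique, but the underlying in/out classification can be illustrated with a simple 2D analogue (a sketch under simplifying assumptions, not the stencil implementation): a point lies inside the shadow volume cast by a segment occluder exactly when the occluder blocks the straight path from the light to that point.

    def _side(a, b, p):
        # Sign of the 2D cross product (b - a) x (p - a)
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def segments_cross(p1, p2, q1, q2):
        # Proper intersection test for segments p1-p2 and q1-q2
        d1, d2 = _side(q1, q2, p1), _side(q1, q2, p2)
        d3, d4 = _side(p1, p2, q1), _side(p1, p2, q2)
        return d1 * d2 < 0 and d3 * d4 < 0

    def in_shadow(light, point, occluder):
        # The point is inside the occluder's 2D shadow volume when the
        # segment from the light to the point crosses the occluder.
        a, b = occluder
        return segments_cross(light, point, a, b)

    # The occluder sits between the light and the sample point.
    print(in_shadow((0.0, 0.0), (4.0, 0.0), ((2.0, -1.0), (2.0, 1.0))))  # True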
Mipmap
In computer graphics, mipmaps (also MIP maps) or pyramids are pre-calculated, optimized sequences of images, each of which is a progressively lower-resolution representation of the previous one. The height and width of each image, or level, in the mipmap is a factor of two smaller than the previous level. Mipmaps do not have to be square. They are intended to increase rendering speed and reduce aliasing artifacts. A high-resolution mipmap image is used for high-density samples, such as for objects close to the camera; lower-resolution images are used as the object appears farther away.
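A minimal Python sketch of mipmap construction, assuming a square power-of-two grayscale texture and a simple 2x2 box filter (real implementations often use better downsampling filters):

    def downsample(image):
        # Average each 2x2 block of texels (a box filter).
        return [[(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
                  image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4.0
                 for x in range(len(image[0]) // 2)]
                for y in range(len(image) // 2)]

    def build_mipmaps(image):
        # Level 0 is the full-resolution texture; every further level halves
        # both dimensions until a single texel remains.
        levels = [image]
        while len(levels[-1]) > 1:
            levels.append(downsample(levels[-1]))
        return levels

    pyramid = build_mipmaps([[float((x + y) % 2) for x in range(8)] for y in range(8)])
    print([(len(level), len(level[0])) for level in pyramid])  # [(8, 8), (4, 4), (2, 2), (1, 1)]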
Ray casting
Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "Ray Casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978 to 1980.
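A minimal Python sketch of the idea, assuming a pinhole camera at the origin looking down -z and a single hard-coded sphere: one ray is cast through each pixel and the pixel is shaded according to whether the ray hits anything.

    import math

    def hit_sphere(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
        oc = [o - c for o, c in zip(origin, center)]
        a = sum(d * d for d in direction)
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / (2.0 * a)
        return t if t > 0.0 else None

    def render(width, height):
        eye, center, radius = (0.0, 0.0, 0.0), (0.0, 0.0, -3.0), 1.0
        rows = []
        for j in range(height):
            row = []
            for i in range(width):
                # Map the pixel to a point on an image plane one unit in front of the eye.
                x = (i + 0.5) / width * 2.0 - 1.0
                y = 1.0 - (j + 0.5) / height * 2.0
                row.append('#' if hit_sphere(eye, (x, y, -1.0), center, radius) else '.')
            rows.append(''.join(row))
        return '\n'.join(rows)

    print(render(40, 20))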
Tessellation (computer graphics)
In computer graphics, tessellation is the division of datasets of polygons (sometimes called vertex sets) representing objects in a scene into suitable structures for rendering. Especially for real-time rendering, data is tessellated into triangles, for example in OpenGL 4.0 and Direct3D 11. A key advantage of tessellation for real-time graphics is that it allows detail to be dynamically added to and subtracted from a 3D polygon mesh and its silhouette edges based on control parameters (often camera distance).
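A toy Python sketch of the add-detail idea (not how GPU tessellation units actually work): recursively split a triangle into four by joining edge midpoints, with the recursion depth standing in for a tessellation factor such as camera distance.

    def midpoint(a, b):
        return tuple((x + y) / 2.0 for x, y in zip(a, b))

    def tessellate(triangle, level):
        # Split one triangle into four smaller ones, 'level' times.
        if level == 0:
            return [triangle]
        a, b, c = triangle
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out = []
        for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
            out.extend(tessellate(t, level - 1))
        return out

    tris = tessellate(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)), level=2)
    print(len(tris))  # 16 triangles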
Scanline rendering
Scanline rendering (also scan line rendering and scan-line rendering) is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear. Each row, or scan line, of the image is then computed from the intersections of that scanline with the polygons at the front of the sorted list, while the list is updated to discard no-longer-visible polygons as the active scan line advances down the picture.
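The per-row work can be illustrated with a much-simplified Python sketch that fills a single polygon scanline by scanline; a full scanline renderer additionally maintains the sorted polygon and active-edge lists described above to resolve visibility between many polygons.

    def scanline_fill(polygon, width, height):
        # Even-odd scanline fill of a simple polygon given as (x, y) vertices.
        image = [['.'] * width for _ in range(height)]
        n = len(polygon)
        for y in range(height):
            yc = y + 0.5  # sample at pixel centres to dodge vertex corner cases
            xs = []
            for i in range(n):
                (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
                if (y0 <= yc < y1) or (y1 <= yc < y0):
                    xs.append(x0 + (yc - y0) * (x1 - x0) / (y1 - y0))
            xs.sort()
            for left, right in zip(xs[0::2], xs[1::2]):
                for x in range(width):
                    if left <= x + 0.5 <= right:
                        image[y][x] = '#'
        return '\n'.join(''.join(row) for row in image)

    print(scanline_fill([(2, 1), (17, 4), (9, 11)], width=20, height=12))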
Texture filtering
In computer graphics, texture filtering or texture smoothing is the method used to determine the texture color for a texture-mapped pixel, using the colors of nearby texels (pixels of the texture). There are two main categories of texture filtering: magnification filtering and minification filtering. Depending on the situation, texture filtering is either a type of reconstruction filter, where sparse data is interpolated to fill gaps (magnification), or a type of anti-aliasing (AA), where the texture contains detail at a higher frequency than the screen sampling rate can capture (minification).
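As an example of a magnification filter, here is a minimal Python sketch of bilinear filtering on a grayscale texture (texel-centre conventions and edge wrapping are ignored for brevity):

    def bilinear_sample(texture, u, v):
        # Blend the four texels surrounding (u, v), given in texel coordinates.
        h, w = len(texture), len(texture[0])
        x0, y0 = int(u), int(v)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = u - x0, v - y0
        top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
        bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
        return top * (1 - fy) + bottom * fy

    tex = [[0.0, 1.0],
           [1.0, 0.0]]
    print(bilinear_sample(tex, 0.5, 0.5))  # 0.5, halfway between the four texels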
Non-photorealistic rendering
Non-photorealistic rendering (NPR) is an area of computer graphics that focuses on enabling a wide variety of expressive styles for digital art, in contrast to traditional computer graphics, which focuses on photorealism. NPR is inspired by other artistic modes such as painting, drawing, technical illustration, and animated cartoons. NPR has appeared in movies and video games in the form of cel-shaded animation (also known as "toon" shading), as well as in scientific visualization, architectural illustration, and experimental animation.
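A tiny Python sketch of the cel ("toon") shading idea mentioned above, under the assumption of simple Lambertian lighting: the diffuse term is quantized into a few flat bands instead of varying smoothly.

    def toon_shade(n_dot_l, bands=3):
        # Quantize the clamped Lambertian term into discrete bands,
        # producing the flat, hand-drawn look of cel shading.
        intensity = max(0.0, min(1.0, n_dot_l))
        band = min(int(intensity * bands), bands - 1)
        return band / (bands - 1)

    print([toon_shade(x / 10.0) for x in range(11)])
    # [0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0, 1.0]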