Texture mapping is a method for mapping a texture onto a computer-generated graphic or 3D model. Texture here can be high-frequency detail, surface texture, or color.
The original technique was pioneered by Edwin Catmull in 1974.
Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface ("wrapping" the image around the object). In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, occlusion mapping, and many other variations on the technique (controlled by a materials system) has made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.
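To make the basic idea concrete, the following is a minimal sketch of diffuse mapping: a nearest-neighbour lookup from UV coordinates in [0, 1] into a small row-major RGB texture. The texture layout, wrapping behaviour, and function names are illustrative assumptions; real renderers add filtering, mipmapping, and perspective correction.

```c
#include <stdint.h>
#include <stdio.h>

/* A tiny RGB texture: width * height texels, 3 bytes per texel, row-major. */
typedef struct {
    int width, height;
    const uint8_t *texels;
} Texture;

/* Nearest-neighbour diffuse lookup: map UV coordinates in [0,1] to a texel.
   Coordinates outside the range are wrapped (repeated), a common default. */
static void sample_diffuse(const Texture *t, float u, float v, uint8_t rgb[3])
{
    /* Wrap into [0,1). */
    u -= (int)u; if (u < 0.0f) u += 1.0f;
    v -= (int)v; if (v < 0.0f) v += 1.0f;

    int x = (int)(u * t->width);
    int y = (int)(v * t->height);
    if (x >= t->width)  x = t->width  - 1;
    if (y >= t->height) y = t->height - 1;

    const uint8_t *p = &t->texels[(y * t->width + x) * 3];
    rgb[0] = p[0]; rgb[1] = p[1]; rgb[2] = p[2];
}

int main(void)
{
    /* 2x2 texture: red, green / blue, white. */
    static const uint8_t data[] = {
        255, 0, 0,   0, 255, 0,
        0, 0, 255,   255, 255, 255,
    };
    Texture tex = { 2, 2, data };

    uint8_t rgb[3];
    sample_diffuse(&tex, 0.75f, 0.25f, rgb);      /* falls in the green texel */
    printf("%d %d %d\n", rgb[0], rgb[1], rgb[2]); /* prints: 0 255 0 */
    return 0;
}
```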
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles.
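To illustrate the distinction between a stored bitmap image and a procedural texture, here is a hypothetical procedural checkerboard computed directly from the texture coordinates rather than read from image data (the tile count and function name are arbitrary choices for the example):

```c
#include <math.h>
#include <stdio.h>

/* A procedural texture is a function of the texture coordinates rather than
   a lookup into stored image data. Here: an 8x8 checkerboard over [0,1]^2. */
static float checker(float u, float v)
{
    int cu = (int)floorf(u * 8.0f);
    int cv = (int)floorf(v * 8.0f);
    return ((cu + cv) & 1) ? 1.0f : 0.0f;  /* alternate light/dark squares */
}

int main(void)
{
    printf("%.1f %.1f\n", checker(0.05f, 0.05f), checker(0.20f, 0.05f));
    /* prints: 0.0 1.0 -- adjacent squares differ */
    return 0;
}
```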
They may have one to three dimensions, although two dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources (which may be located in device memory) as buffers or surfaces, and may allow 'render to texture' for additional effects such as post-processing or environment mapping.
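One common form of swizzled ordering is the Morton (Z-order) layout, which interleaves the bits of the x and y texel coordinates so that texels near each other in 2D also sit near each other in memory. The sketch below assumes a square, power-of-two texture; actual hardware layouts vary and are often tile-based rather than fully swizzled.

```c
#include <stdint.h>
#include <stdio.h>

/* Interleave the low 16 bits of x and y into a Morton (Z-order) index.
   Neighbouring (x, y) texels map to nearby addresses, which improves cache
   behaviour for 2D access patterns compared with plain row-major order. */
static uint32_t morton_index(uint32_t x, uint32_t y)
{
    uint32_t index = 0;
    for (int bit = 0; bit < 16; ++bit) {
        index |= ((x >> bit) & 1u) << (2 * bit);      /* x bits in even slots */
        index |= ((y >> bit) & 1u) << (2 * bit + 1);  /* y bits in odd slots  */
    }
    return index;
}

int main(void)
{
    /* Row-major offsets for (1,2) and (2,1) in a 256-wide texture are 513 and
       258; in Morton order they become 9 and 6, much closer together. */
    printf("%u %u\n", morton_index(1, 2), morton_index(2, 1)); /* prints: 9 6 */
    return 0;
}
```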
They usually contain RGB color data (stored as direct color, in compressed formats, or as indexed color), and sometimes an additional channel for alpha blending (RGBA), especially for billboards and decal overlay textures. It is also possible to use the alpha channel (which may be convenient to store in formats parsed by hardware) for other purposes such as specularity.
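The following sketch shows one way a 32-bit direct-color RGBA texel might be unpacked, with the fourth channel read either as opacity for blending or reinterpreted as a specular intensity. The 0xAARRGGBB packing order used here is one common convention, not something mandated by any particular API.

```c
#include <stdint.h>
#include <stdio.h>

/* Unpack one 32-bit texel stored as 0xAARRGGBB. The fourth channel can be
   treated as alpha for blending, or repurposed, e.g. as a specular mask. */
static void unpack_rgba(uint32_t texel,
                        uint8_t *r, uint8_t *g, uint8_t *b, uint8_t *a)
{
    *a = (uint8_t)(texel >> 24);
    *r = (uint8_t)(texel >> 16);
    *g = (uint8_t)(texel >> 8);
    *b = (uint8_t)(texel);
}

int main(void)
{
    uint8_t r, g, b, a;
    unpack_rgba(0x80FF4000u, &r, &g, &b, &a);
    float specular = a / 255.0f;        /* alpha reused as specular intensity */
    printf("%d %d %d alpha=%d spec=%.2f\n", r, g, b, a, specular);
    /* prints: 255 64 0 alpha=128 spec=0.50 */
    return 0;
}
```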
Multiple texture maps (or channels) may be combined for control over specularity, normals, displacement, or subsurface scattering, e.g. for skin rendering.
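As a sketch of how such maps can be combined, the example below feeds values sampled from a diffuse map, a specular map, and an occlusion map into a simple Lambert-plus-Phong shading step. The structure, map selection, and fixed shininess exponent are illustrative assumptions rather than a standard pipeline.

```c
#include <math.h>
#include <stdio.h>

/* Values already sampled from separate texture maps at the same UV. */
typedef struct {
    float diffuse[3];   /* albedo colour from the diffuse map             */
    float specular;     /* intensity from the specular map                */
    float occlusion;    /* ambient-occlusion factor, 1 = fully unoccluded */
} MaterialSample;

/* Combine the maps with simple Lambert diffuse + Phong-style specular.
   n_dot_l and r_dot_v are precomputed lighting dot products in [0,1]. */
static void shade(const MaterialSample *m, float n_dot_l, float r_dot_v,
                  float out[3])
{
    float spec = m->specular * powf(r_dot_v, 32.0f);  /* fixed shininess of 32 */
    for (int i = 0; i < 3; ++i)
        out[i] = m->occlusion * (m->diffuse[i] * n_dot_l + spec);
}

int main(void)
{
    MaterialSample m = { {0.8f, 0.6f, 0.5f}, 0.4f, 0.9f };
    float colour[3];
    shade(&m, 0.7f, 0.95f, colour);
    printf("%.3f %.3f %.3f\n", colour[0], colour[1], colour[2]);
    return 0;
}
```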