Reyes rendering is a computer software architecture used in 3D computer graphics to render photo-realistic images. It was developed in the mid-1980s by Loren Carpenter and Robert L. Cook at Lucasfilm's Computer Graphics Research Group, which is now Pixar. It was first used in 1982 to render images for the Genesis effect sequence in the movie Star Trek II: The Wrath of Khan. Pixar's RenderMan was an implementation of the Reyes algorithm until the algorithm was removed from the renderer in 2016. According to the original paper describing the algorithm, the Reyes image rendering system is "An architecture for fast high-quality rendering of complex images." Reyes was proposed as a collection of algorithms and data processing systems. However, the terms "algorithm" and "architecture" have come to be used synonymously in this context and are used interchangeably in this article.
Reyes is an acronym for Renders Everything You Ever Saw (the name is also a pun on Point Reyes, California, near where Lucasfilm was located) and is suggestive of processes connected with optical imaging systems. According to Robert L. Cook, Reyes is written with only the first letter capitalized, as it is in the 1987 Cook/Carpenter/Catmull SIGGRAPH paper.
The architecture was designed with a number of goals in mind:
Model complexity/diversity: In order to generate visually complex and rich images, users of a rendering system need to be free to model large numbers (100,000s) of complex geometric structures, possibly generated using procedural models such as fractals and particle systems (a toy procedural emitter is sketched after this list).
Shading complexity: Much of the visual complexity in a scene is generated by the way in which light rays interact with solid object surfaces. Generally, in computer graphics, this is modelled using textures, which may store arrays of color values or describe surface displacement, transparency, or reflectivity. Reyes allows users to incorporate procedural shaders, whereby surface structure and optical interaction are computed by programs implementing procedural algorithms rather than read from simple look-up tables (a toy procedural shader is sketched after this list).
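The following sketch illustrates the procedural-modelling point above. It is written in Python purely for illustration; the emitter, its parameters, and its constants are hypothetical choices made here, not part of the Reyes architecture or of any particular implementation.

    import random

    def emit_particles(count, age, seed=0):
        """Procedurally generate 'count' point primitives for a simple spray
        emitter at a given age.  A few lines of program text stand in for
        hundreds of thousands of explicitly modelled primitives."""
        rng = random.Random(seed)
        points = []
        for _ in range(count):
            # Random launch velocity: mostly upward, with some sideways spread.
            vx = rng.uniform(-0.2, 0.2)
            vy = rng.uniform(0.8, 1.2)
            vz = rng.uniform(-0.2, 0.2)
            # Ballistic motion under constant gravity gives each particle's position.
            x = vx * age
            y = vy * age - 0.5 * 9.8 * age * age
            z = vz * age
            points.append((x, y, z))
        return points

    # 100,000 primitives generated on demand rather than modelled by hand.
    cloud = emit_particles(100_000, age=0.5)

A renderer aiming at this complexity goal must accept such on-the-fly geometry as readily as explicitly authored models.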
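The next sketch illustrates the procedural-shading point: the surface color is computed by a small program rather than read from a stored texture map. Again this is a toy Python illustration; the function name, its parameters, and the checkerboard pattern are assumptions made here, and real Reyes implementations typically expose a dedicated shading language for this purpose.

    def checker_shader(u, v, n_dot_l, frequency=8.0):
        """Toy procedural surface shader.  The color is computed from the
        parametric surface coordinates (u, v); n_dot_l is the cosine of the
        angle between surface normal and light direction, supplied by the
        renderer."""
        # Procedural pattern: alternate two colors in a checkerboard over (u, v).
        check = (int(u * frequency) + int(v * frequency)) % 2
        base = (0.9, 0.9, 0.2) if check else (0.2, 0.2, 0.8)
        # A simple Lambertian diffuse factor models the light/surface interaction.
        k = max(n_dot_l, 0.0)
        return tuple(k * c for c in base)

    # Example call for one surface sample.
    rgb = checker_shader(0.31, 0.77, n_dot_l=0.6)

A texture-mapped shader would instead look the base color up in an image; the procedural version trades that lookup for arbitrary computation, which is what gives this style of shading its flexibility.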