Scanline rendering (also scan line rendering and scan-line rendering) is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear, then each row or scan line of the image is computed using the intersection of a scanline with the polygons on the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line is advanced down the picture.
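A rough sketch of this outer loop in Python, under assumed names: a `Polygon` record carrying its topmost and bottommost projected Y, and a hypothetical `resolve_spans` callback standing in for the per-scanline intersection and shading work. Neither name comes from the text above; this only illustrates the bookkeeping of the sorted list and the active set.

```python
from typing import Callable, List, NamedTuple

class Polygon(NamedTuple):
    y_min: int     # topmost scanline the projected polygon touches
    y_max: int     # bottommost scanline it touches
    data: object   # edges, plane equation, shading info, etc. (left abstract)

def scanline_pass(polygons: List[Polygon], height: int,
                  resolve_spans: Callable[[int, List[Polygon]], None]) -> None:
    """Walk the image row by row, maintaining the list of polygons the
    current scanline can still intersect. resolve_spans is a hypothetical
    callback that intersects one scanline with the active polygons and
    shades the visible spans."""
    ordered = sorted(polygons, key=lambda p: p.y_min)   # sort once by top Y
    active: List[Polygon] = []
    nxt = 0
    for y in range(height):
        # Admit polygons whose top the scanline has reached.
        while nxt < len(ordered) and ordered[nxt].y_min <= y:
            active.append(ordered[nxt])
            nxt += 1
        # Discard polygons the scanline has passed entirely.
        active = [p for p in active if p.y_max >= y]
        # Per-scanline intersection and shading, left abstract here.
        resolve_spans(y, active)
```

One concrete way to carry out the per-scanline work is sketched after the description of the edge buckets below.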
The main advantage of this method is that sorting vertices along the normal of the scanning plane reduces the number of comparisons between edges. Another advantage is that it is not necessary to translate the coordinates of all vertices from the main memory into the working memory—only vertices defining edges that intersect the current scan line need to be in active memory, and each vertex is read in only once. The main memory is often very slow compared to the link between the central processing unit and cache memory, and thus avoiding re-accessing vertices in main memory can provide a substantial speedup.
This kind of algorithm can be easily integrated with many other graphics techniques, such as the Phong reflection model or the Z-buffer algorithm.
The usual method starts with edges of projected polygons inserted into buckets, one per scanline; the rasterizer maintains an active edge table (AET). Entries hold sort links, X coordinates, gradients, and references to the polygons they bound. To rasterize the next scanline, edges that are no longer relevant are removed, and new edges from the current scanline's Y-bucket are added, inserted sorted by X coordinate. The X coordinates and other incremental parameters of the active edge table entries are then updated. Active edge table entries are kept in an X-sorted list, re-sorting when two edges cross.
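The bucket and active-edge-table bookkeeping just described might look roughly like the following Python sketch, reduced to filling a single simple polygon with a flat colour. It assumes integer vertex coordinates inside the framebuffer and omits the per-span depth resolution that a full scanline visible-surface renderer would add on top.

```python
def scanline_fill(vertices, width, height, color=1):
    """vertices: a simple polygon as a list of (x, y) integer pairs in order."""
    framebuffer = [[0] * width for _ in range(height)]

    # One Y-bucket per scanline, holding the edges that first appear there.
    buckets = [[] for _ in range(height)]
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if y0 == y1:                          # horizontal edges never cross a scanline
            continue
        if y0 > y1:                           # orient every edge top-down
            (x0, y0), (x1, y1) = (x1, y1), (x0, y0)
        buckets[y0].append({
            'x': float(x0),                   # X of the crossing on the current scanline
            'dx': (x1 - x0) / (y1 - y0),      # X increment per scanline (the gradient)
            'y_max': y1,                      # scanline at which the edge is retired
        })

    active = []                               # the active edge table (AET)
    for y in range(height):
        active.extend(buckets[y])                        # add newly relevant edges
        active = [e for e in active if e['y_max'] > y]   # drop finished edges
        active.sort(key=lambda e: e['x'])                # keep X-sorted (handles crossings)

        # Fill between successive pairs of crossings (even-odd rule).
        for left, right in zip(active[0::2], active[1::2]):
            for x in range(int(round(left['x'])), int(round(right['x']))):
                if 0 <= x < width:
                    framebuffer[y][x] = color

        for e in active:                      # step each crossing to the next scanline
            e['x'] += e['dx']

    return framebuffer
```

The half-open convention (an edge is active on scanlines from its top Y up to, but not including, its bottom Y) keeps the crossing count even at shared vertices, so the even-odd pairing stays consistent.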
1st year: the fundamentals needed for 2D (and 3D) computer-based representation.
Moving from a single software package to several: the ability to choose the appropriate tools in 2D and in 3D.
Linking CAD tools together
This course covers advanced 3D graphics techniques for realistic image synthesis. Students will learn how light interacts with objects in our world, and how to recreate these phenomena in a computer simulation.
This course addresses the subject of moving images. It focuses on the field of 3D computer graphics and the animation of computer-generated images (CGI).
Computer graphics deals with generating images and art with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing.
Texture mapping is a method for mapping a texture on a computer-generated graphic. Texture here can be high frequency detail, surface texture, or color. The original technique was pioneered by Edwin Catmull in 1974. Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface ("wrapping" the image around the object).
In computer graphics, rasterisation (British English) or rasterization (American English) is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image (a series of pixels, dots or lines, which, when displayed together, create the image which was represented via shapes). The rasterized image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format.
Photometric stereo, a computer vision technique for estimating the 3D shape of objects through images captured under varying illumination conditions, has been a topic of research for nearly four decades. In its general formulation, photometric stereo is an ...
Physically based rendering methods can create photorealistic images by simulating the propagation and interaction of light in a virtual scene. Given a scene description including the shape of objects, participating media, material properties, etc., the sim ...
This data package supports the publication 'Complexity of crack front geometry enhances toughness of brittle solids' by Xinyue Wei, Chenzhuo Li, Cían McCarthy, and John M. Kolinski, Nature Physics (2024), https://doi.org/10.1038/s41567-024-02435-x DOI: 1 ...