Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three-dimensional scene information, which can then be used to produce 3D images, enhanced depth of field, and selective defocusing (or "post-focus"). Enhanced depth of field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.
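The post-focus capability mentioned above follows from a simple idea: once the directional samples of a light field are available, images focused at different depths can be synthesized by shifting each sub-aperture view in proportion to its angular offset and averaging. The sketch below illustrates this shift-and-add principle in NumPy; the array layout, the `refocus` name, and the `slope` parameter are assumptions for illustration, not the interface of any particular light field camera.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field, slope):
    """Shift-and-add refocusing of a 4D light field (illustrative sketch).

    light_field : array of shape (U, V, H, W), grayscale sub-aperture views
                  indexed by angular coordinates (u, v).
    slope       : refocus parameter; each view is translated in proportion
                  to its angular offset before averaging, which moves the
                  plane of focus.
    """
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0   # angular centre of the aperture
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Translate each view by an amount proportional to its offset
            # from the central view (axis convention is an assumption here),
            # then accumulate.
            dy, dx = slope * (u - uc), slope * (v - vc)
            out += nd_shift(light_field[u, v], (dy, dx), order=1, mode='nearest')
    # Averaging the shifted views simulates a synthetic aperture.
    return out / (U * V)
```

Sweeping `slope` moves the synthetic focal plane through the scene, which is what a "post-focus" slider exposes to the user; averaging without shifts reproduces the nominal focus with a wide synthetic aperture and hence a shallow depth of field.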
The definition of computational photography has evolved to cover a number of
subject areas in computer graphics, computer vision, and applied
optics. These areas are given below, organized according to a taxonomy
proposed by Shree K. Nayar. Within each area is a list of techniques, and for
each technique one or two representative papers or books are cited.
Deliberately omitted from the taxonomy are image processing techniques
applied to traditionally captured images in order to produce better images.
Examples of such techniques are
dynamic range compression (i.e. tone mapping),
color management, image completion (a.k.a. inpainting or hole filling),
digital watermarking, and artistic image effects.
Also omitted are techniques that produce range data,
volume data, 3D models, 4D light fields,
4D, 6D, or 8D BRDFs, or other high-dimensional image-based representations. Epsilon photography is a sub-field of computational photography.
Computational photography can allow amateurs to produce photographs that rival the quality of professional photographers' work, but as of 2019 it does not outperform the use of professional-level equipment.
Computational illumination is the practice of controlling photographic illumination in a structured fashion and then processing the captured images to create new images.
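One classic instance is image-based relighting: because light transport is linear, a photograph of the scene under any mixture of light sources can be synthesized as a weighted sum of images each captured with a single source turned on. The sketch below assumes such a per-light capture stack; the `relight` name and the array layout are illustrative, not taken from any specific system.

```python
import numpy as np

def relight(basis_images, weights):
    """Synthesize a new photograph from single-light captures (illustrative sketch).

    Because light transport is linear, an image of the scene under any
    combination of the captured light sources is a weighted sum of the
    per-light images; the weights encode each source's intensity or colour.

    basis_images : array of shape (N, H, W, 3), one linear-intensity image
                   per light source.
    weights      : array of shape (N,) or (N, 3) giving the desired
                   contribution of each source.
    """
    basis = np.asarray(basis_images, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    if w.ndim == 1:
        w = w[:, None, None, None]   # per-light scalar intensities
    else:
        w = w[:, None, None, :]      # per-light RGB intensities
    # Weighted sum over the light-source axis; clip away negative values.
    return np.clip((basis * w).sum(axis=0), 0.0, None)
```

Flash/no-flash processing and light-stage relighting rely on the same linearity; they differ mainly in how the basis images are captured and how the weights are chosen.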
In photography and videography, multi-exposure HDR capture is a technique that creates extended or high dynamic range (HDR) images by taking and combining multiple exposures of the same subject matter at different exposure levels. Combining multiple images in this way results in an image with a greater dynamic range than would be possible with a single exposure. The technique can also be used to capture video by taking and combining multiple exposures for each frame of the video.
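As a rough sketch of the merging step, assume the exposures are already aligned, the camera response has been removed so pixel values are linear, and the exposure times are known. Radiance can then be estimated per pixel as a weighted average of intensity divided by exposure time, with a hat-shaped weight that discounts under- and over-exposed samples. The function below illustrates this general approach rather than any specific published algorithm; the names and the [0, 1] intensity convention are assumptions.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Estimate scene radiance from multiple exposures of a static scene (sketch).

    images         : array of shape (N, H, W), linear intensities in [0, 1]
                     (already aligned, camera response removed).
    exposure_times : length-N sequence of exposure times in seconds.
    """
    imgs = np.asarray(images, dtype=np.float64)
    times = np.asarray(exposure_times, dtype=np.float64)[:, None, None]

    # Hat weighting: trust mid-range pixels, distrust values near 0 or 1,
    # which are likely under- or over-exposed.
    weights = 1.0 - np.abs(2.0 * imgs - 1.0)

    eps = 1e-8  # avoids division by zero where every exposure is unusable
    radiance = (weights * imgs / times).sum(axis=0) / (weights.sum(axis=0) + eps)
    return radiance
```

The resulting radiance map is then typically tone mapped back to a displayable range; colour images can be handled by applying the same merge per channel.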
Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. Understanding in this context means the transformation of visual images (the input to the retina in the human analog) into descriptions of the world that make sense to thought processes and can elicit appropriate action.
Megapixel single-photon avalanche diode (SPAD) arrays have been developed recently, opening up the possibility of deploying SPADs as general-purpose passive cameras for photography and computer vision. However, most previous work on SPADs has been limited ...
We present GeoNeRF, a generalizable photorealistic novel view synthesis method based on neural radiance fields. Our approach consists of two main stages: a geometry reasoner and a renderer. To render a novel view, the geometry reasoner first constructs cas ...