
Light field

Summary

The light field is a vector function that describes the amount of light flowing in every direction through every point in space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The phrase light field was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space.
Modern approaches to light-field display explore co-designs of optical elements and compressive computation to achieve higher resolutions, increased contrast, wider fields of view, and other benefits.
The term “radiance field” may also be used to refer to similar concepts, and appears in modern research such as neural radiance fields.
In geometric optics, i.e., for incoherent light and for objects larger than the wavelength of light, the fundamental carrier of light is a ray. The measure for the amount of light traveling along a ray is radiance, denoted by L and measured in W·sr⁻¹·m⁻², i.e., watts (W) per steradian (sr) per square meter (m²). The steradian is a measure of solid angle, and square meters are used as a measure of cross-sectional area.
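The unit can be made concrete with a short calculation. This is an illustrative sketch only; the `radiance` helper and the numbers in it are hypothetical, not part of any library:

```python
import math

def radiance(power_w, solid_angle_sr, area_m2, theta_rad=0.0):
    """Radiance L = Phi / (Omega * A * cos(theta)), in W / (sr * m^2):
    power per unit solid angle per unit projected area."""
    return power_w / (solid_angle_sr * area_m2 * math.cos(theta_rad))

# 1 mW of flux through 0.01 sr and a 1 cm^2 patch viewed head-on:
print(radiance(1e-3, 0.01, 1e-4))  # 1000.0 W per sr per m^2
```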
The radiance along all such rays in a region of three-dimensional space illuminated by an unchanging arrangement of lights is called the plenoptic function. The plenoptic illumination function is an idealized function used in computer vision and computer graphics to express the image of a scene from any possible viewing position, at any viewing angle, at any point in time. It is not used computationally in practice, but it is conceptually useful for understanding other concepts in vision and graphics. Since rays in space can be parameterized by three coordinates x, y, and z and two angles θ and φ, it is a five-dimensional function, that is, a function over a five-dimensional manifold equivalent to the product of 3D Euclidean space and the 2-sphere.
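The five-coordinate parameterization can be sketched as code. The scene below (a uniform "sky" of radiance 1.0 above the horizon, 0.0 below) is a toy stand-in for illustration, not a real scene model, and both function names are assumptions:

```python
import math

def ray_direction(theta, phi):
    """Unit direction vector for polar angle theta and azimuth phi (radians)."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def plenoptic(x, y, z, theta, phi):
    """Toy plenoptic function L(x, y, z, theta, phi): the radiance seen from
    position (x, y, z) along the direction (theta, phi). Here the 'scene' is
    a uniform sky of radiance 1.0 above the horizon and 0.0 below."""
    _, _, dz = ray_direction(theta, phi)
    return 1.0 if dz > 0.0 else 0.0

print(plenoptic(0.0, 0.0, 0.0, 0.5, 0.0))  # looking upward: 1.0
```

Fixing the position and varying the two angles yields an image; varying the position as well spans the full five-dimensional domain.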

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related publications (21)


Related concepts (8)

Related courses (3)


Computer graphics deals with generating images and art with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing.


A 3D display is a display device capable of conveying depth to the viewer. Many 3D displays are stereoscopic displays, which produce a basic 3D effect by means of stereopsis, but can cause eye strain and visual fatigue. Newer 3D displays such as holographic and light field displays produce a more realistic 3D effect by combining stereopsis and accurate focal length for the displayed content. Newer 3D displays in this manner cause less visual fatigue than classical stereoscopic displays.

CS-413: Computational photography

The students will gain the theoretical knowledge in computational photography, which allows recording and processing a richer visual experience than traditional digital imaging. They will also execute

CH-448: Photomedicine

The most important clinical diagnostic and therapeutic applications of light will be described. In addition, this course will address the principles governing the interactions between light and biolog

PHYS-613: Nonlinear Spectroscopy

To provide an introduction into the field of nonlinear spectroscopy, and focus in particular on linear and nonlinear light scattering

With the development of quantum optics, photon correlations acquired a prominent role as a tool to test our understanding of physics, and played a key role in verifying the validity of quantum mechanics. The spatial and temporal correlations in a light field also reveal information about its origin, and allow us to probe the nature of the physical systems interacting with it. Additionally, with the advent of quantum technologies, they have acquired technological relevance, as they are expected to play an important role in quantum communication and quantum information processing. This thesis develops techniques that combine spontaneous Raman scattering with Time-Correlated Single Photon Counting, and uses them to study the quantum mechanical nature of high-frequency vibrations in crystals and molecules. We demonstrate photon bunching in the Stokes and anti-Stokes fields scattered from two ultrafast laser pulses, and use their cross-correlation to measure the 3.9 ps decay time of the optical phonon in diamond. We then employ this method to measure molecular vibrations in CS2, where we are able to excite the respective vibrational modes of the two isotopic species present in the sample in a coherent superposition, and observe quantum beating between the two signals. Stokes scattering, when combined with a projective measurement, leads to a well-defined quantum state. We demonstrate this by measuring the second-order correlation function of the anti-Stokes field conditional on detecting one or more photons in the Stokes field, which allows us to observe a phonon mode's transition from a thermal state into the first excited Fock state, and to measure its decay over the characteristic phonon lifetime. Finally, we use this technique to prepare a highly entangled photon-phonon state, which violates a Bell-type inequality.
We measure S = 2.360 ± 0.025, violating the CHSH inequality, compatible with the non-locality of the state. The techniques we developed open the door to the study of a broad range of physical systems, where spectroscopic information is obtained with the preparation of specific quantum states. They also hold potential for future technological use, and promote vibrational Raman scattering to a resource in nonlinear quantum optics, where it used to be considered a source of noise instead.

As of today, extending human visual capabilities to machines remains both a cornerstone and an open challenge in the attempt to develop intelligent systems. On the one hand, the development of ever more sophisticated imaging devices, capable of sensing richer information than a plain perspective projection of the real world, is critical to allow machines to understand the complex environment around them. On the other hand, despite the advances in imaging, the complexity of the real world cannot be fully captured by a single imaging device, whether due to intrinsic hardware limitations or to the complexity of the environment itself. As a consequence, extending human visual capabilities to machines inevitably requires estimating, from the available captured data, unknown quantities that could not be measured. Equivalently, imaging requires the solution of arbitrarily complex inverse problems.
In most scenarios, inverse problems are ill-posed and admit an infinite number of solutions, of which only one or a few are the desired ones. It therefore becomes crucial to reduce, equivalently *to regularize*, the solution space by exploiting all the available prior information about the problem structure and, especially, about the target quantity to estimate. In this thesis we investigate the use of graph-based regularizers to encode our prior knowledge about the target quantity and to inject it directly into the inverse problem. In particular, we cast the inverse problem as an optimization task, where the target quantity is modelled as a graph whose topology captures our prior knowledge. To show the effectiveness and the flexibility of graph-based regularizers, we study their use in different inverse imaging problems, each characterized by different geometrical constraints.
We start by investigating how to augment the resolution of a light field. Although light field cameras can capture the 3D information in a scene within a single exposure, thus providing much richer information than a perspective camera, their compact design dramatically limits their spatial resolution. We present a smooth graph-based regularizer which models the geometry of a light field explicitly, and we use it to augment the light field's spatial resolution while relying only on the complementary information encoded in the low-resolution light field views. In particular, we show that the use of a graph-based regularizer makes it possible to enforce the light field's geometric structure without the need for a precise and costly disparity estimation step.
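The general idea of smooth graph-based regularization can be sketched in miniature. The example below is a simplified stand-in, not the thesis's method: a 1-D signal replaces the 4-D light field, a plain path graph replaces a geometry-aware graph, and super-resolution is posed as the quadratic problem min_x ||Ax - y||² + λ·xᵀLx, which the normal equations solve in closed form:

```python
import numpy as np

# Tiny 1-D stand-in for super-resolution: recover a high-resolution signal x
# from a downsampled observation y by solving
#     min_x ||A x - y||^2 + lam * x^T L x,
# where L is the Laplacian of a path graph linking neighbouring samples.
n = 8

# 2x downsampling by averaging adjacent pairs (hypothetical operator A)
A = np.zeros((n // 2, n))
for i in range(n // 2):
    A[i, 2 * i] = A[i, 2 * i + 1] = 0.5

# Path-graph Laplacian L = D - W, encoding smoothness between neighbours
W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(W.sum(axis=1)) - W

x_true = np.linspace(0.0, 1.0, n)   # smooth ground-truth signal
y = A @ x_true                      # low-resolution observation
lam = 0.1

# Normal equations of the regularized least-squares problem
x_hat = np.linalg.solve(A.T @ A + lam * L, A.T @ y)
```

The data term alone is underdetermined (four equations, eight unknowns); the Laplacian term xᵀLx = Σ (x_i − x_j)² over graph edges selects the smooth solution among the infinitely many that fit the observation.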
We then analyze the further benefits of adopting nonsmooth graph-based regularizers, as these preserve edges and fine details better than their smooth counterparts. In particular, we focus on a specific nonsmooth graph-based regularizer and show its effectiveness in two applications. The first again revolves around light field super-resolution, which permits a comparison with the smooth regularizer adopted previously. The second is disparity estimation in omnidirectional stereo systems, where the two captured images and the target disparity map live on a spherical surface, so a graph-based regularizer can be used to model the nontrivial correlation underlying each signal.
Finally, we investigate the refinement of a depth map and the estimation of the corresponding normal map. In fact, ...

Toralf Scharf, Maryam Yousefi, Daniel Necesal

In this Letter, we present the photonic nanojet as a phenomenon in a structured light generator system that is implemented to modify the source focal spot size and emission angle. The optical system comprises a microlens array that is illuminated by a focused Gaussian beam to generate a structured pattern in the far field. By introducing a spheroid with different aspect ratios in the focus of the Gaussian beam, the source optical characteristics change, and a photonic nanojet is generated, which will engineer the far-field distribution. To probe the light fields, we implement a high-resolution interferometry setup to extract both the phase and intensity at different planes. We both numerically and experimentally demonstrate that the pattern distribution in the far field can be engineered by a photonic nanojet. As an example, we examine prolate, sphere, and oblate geometries. An interesting finding is that depending on the spheroid geometry, a smaller transverse FWHM of a photonic nanojet with a higher divergence angle produces an increased pattern field of view at the same physical size of the optical system. (C) 2021 Optical Society of America

Related lectures (6)

Shadows and Projections: Descriptive Geometry

Covers shadow projection of objects on different planes and how to determine if a face is illuminated or shaded.

Perception and Action: Biological Motion (CS-444: Virtual reality)

Delves into the perception of biological motion and its implications on human perception and animation quality.

Space Trusses & Frames

Covers space trusses, frames, and machines, emphasizing equilibrium analysis and structural design principles.