
# Merlin Eléazar Nimier-David

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


People doing similar research (146)

Related research domains (4)

Algorithm

In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation.
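As a concrete illustration of this definition, here is a classic algorithm, Euclid's method for the greatest common divisor (a standard textbook example, chosen here for illustration only):

```python
# Euclid's algorithm: a finite sequence of rigorous instructions that
# solves a whole class of problems (finding the GCD of any two integers).
def gcd(a, b):
    while b:
        a, b = b, a % b  # replace (a, b) with (b, a mod b) until b is 0
    return a

result = gcd(48, 18)  # 48 = 2*18 + 12; 18 = 1*12 + 6; 12 = 2*6 + 0
```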

Automatic differentiation

In mathematics and computer algebra, automatic differentiation (auto-differentiation, autodiff, or AD), also called algorithmic differentiation or computational differentiation, is a set of techniques for evaluating the derivative of a function specified by a computer program.
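One of the simplest forms of automatic differentiation is forward mode with dual numbers, sketched below; this is an illustrative toy, not the approach of any system mentioned on this page:

```python
# Forward-mode automatic differentiation via dual numbers: each value
# carries its derivative, and arithmetic propagates both simultaneously.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u * v)' = u' * v + u * v'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f and df/dx at x in a single forward pass."""
    out = f(Dual(x, 1.0))
    return out.value, out.deriv

# f(x) = x^2 + 3x, so f(2) = 10 and f'(2) = 2*2 + 3 = 7
val, grad = derivative(lambda x: x * x + 3 * x, 2.0)
```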

Loss function

In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number, intuitively representing some "cost" associated with the event.
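A minimal example of such a mapping is the mean squared error, which reduces a vector of prediction errors to a single scalar cost (chosen here purely for illustration, not named on this page):

```python
import numpy as np

# A loss function maps predictions and targets to one scalar "cost".
def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# Errors are (0, 0, 2); squared they are (0, 0, 4); the mean is 4/3.
loss = mse(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 5.0]))
```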

Courses taught by this person

No results

Related units (4)

Related publications (4)


Physically based rendering methods can create photorealistic images by simulating the propagation and interaction of light in a virtual scene. Given a scene description including the shape of objects, participating media, material properties, etc., the simulation computes an image representing the radiance reaching the sensor. This thesis, however, pursues the corresponding inverse problem: given observations such as pictures of a scene, we want to recover a plausible description of its components. Unfortunately, the rendering process is typically far too complex to invert analytically. We therefore turn to iterative gradient-based approximations of the inverse, which require efficiently estimating gradients of an objective function with respect to the scene parameters of interest.

The first part of this thesis is dedicated to algorithms for gradient estimation. We consider the use of automatic differentiation and examine the associated tradeoffs. Next, we introduce radiative backpropagation, an adjoint method that casts the gradient estimation problem into a modified light transport problem, unlocking vastly more efficient implementations. For the case of participating media, we propose differential ratio tracking, a sampling technique that addresses the bias and high variance found in existing gradient estimators.

In the second part, we focus on the design of systems to support effective differentiable rendering research, including the efficient implementation of the algorithms above. We describe the architecture and features of Mitsuba 2, an open-source retargetable physically based renderer. Mitsuba 2 supports different representations of colors (RGB, spectral, polarized), computing platforms (scalar, CPU vectorized, GPU), and numerical precision within a single codebase. Importantly, automatic differentiation can be applied throughout the system. We then extend this design by applying an automatic conversion from wavefront-style rendering to a megakernel-based approach, leveraging just-in-time compilation. The result is a fast, flexible, and memory-efficient framework for primal and differentiable physically based rendering.

Finally, the third part showcases three applications of differentiable physically based rendering: caustic design, inverse volume rendering, and material and lighting estimation in real indoor scenes. In all cases, special care is taken to avoid sub-optimal local minima caused by the ambiguous and non-convex nature of the reconstruction problems.
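The gradient-based inverse loop described in the abstract can be illustrated with a toy one-parameter "renderer" and hand-derived gradients; this is a stand-in sketch, not Mitsuba 2's actual API:

```python
import numpy as np

BASE = np.array([0.2, 0.5, 0.8])  # fixed per-pixel response (hypothetical)

def render(albedo):
    # Hypothetical differentiable renderer: image brightness scales
    # linearly with a single scene parameter (an albedo-like scalar).
    return albedo * BASE

def loss_and_grad(albedo, reference):
    residual = render(albedo) - reference
    loss = np.sum(residual ** 2)
    # d(loss)/d(albedo) = sum(2 * residual * d(image)/d(albedo))
    grad = np.sum(2.0 * residual * BASE)
    return loss, grad

reference = render(0.7)  # "observation" of the true scene parameter
albedo = 0.1             # initial guess
for _ in range(200):
    loss, grad = loss_and_grad(albedo, reference)
    albedo -= 0.1 * grad  # gradient-descent step toward the observation
```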

Wenzel Alban Jakob, Merlin Eléazar Nimier-David

A computer-implemented inverse rendering method is provided. The method comprises:

- computing an adjoint image by differentiating an objective function that evaluates the quality of a rendered image, image elements of the adjoint image encoding the sensitivity of spatially corresponding image elements of the rendered image with respect to the objective function;
- sampling the adjoint image at a respective sample position to determine a respective adjoint radiance value associated with the respective sample position;
- emitting the respective adjoint radiance value into a scene model characterised by scene parameters;
- determining an interaction location of a respective incident adjoint radiance value with a surface and/or volume of the scene model;
- determining a respective incident radiance value or an approximation thereof at the interaction location, the respective incident radiance value expressing the amount of incident radiance that travels from the surrounding environment towards the interaction location; and
- updating a scene parameter gradient by using at least the interaction location and the respective incident adjoint radiance value and the respective incident radiance value or its approximation at the interaction location.
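The first step, computing an adjoint image, has a simple closed form when the objective is an L2 image loss; the sketch below illustrates that special case only and is not the patented method itself:

```python
import numpy as np

def l2_objective(rendered, reference):
    # Objective evaluating the quality of a rendered image.
    return 0.5 * np.sum((rendered - reference) ** 2)

def adjoint_image(rendered, reference):
    # For the L2 objective, d(objective)/d(pixel) = rendered - reference:
    # each entry encodes the sensitivity of the objective to that pixel.
    return rendered - reference

rendered  = np.array([[0.3, 0.9], [0.1, 0.5]])
reference = np.array([[0.2, 0.8], [0.4, 0.5]])
adj = adjoint_image(rendered, reference)
```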

2021 · Wenzel Alban Jakob, Merlin Eléazar Nimier-David, Sébastien Nicolas Speierer

Physically based differentiable rendering has recently evolved into a powerful tool for solving inverse problems involving light. Methods in this area perform a differentiable simulation of the physical process of light transport and scattering to estimate partial derivatives relating scene parameters to pixels in the rendered image. Together with gradient-based optimization, such algorithms have interesting applications in diverse disciplines, e.g., to improve the reconstruction of 3D scenes, while accounting for interreflection and transparency, or to design meta-materials with specified optical properties. The most versatile differentiable rendering algorithms rely on reverse-mode differentiation to compute all requested derivatives at once, enabling optimization of scene descriptions with millions of free parameters. However, a severe limitation of the reverse-mode approach is that it requires a detailed transcript of the computation that is subsequently replayed to back-propagate derivatives to the scene parameters. The transcript of typical renderings is extremely large, exceeding the available system memory by many orders of magnitude, hence current methods are limited to simple scenes rendered at low resolutions and sample counts. We introduce radiative backpropagation, a fundamentally different approach to differentiable rendering that does not require a transcript, greatly improving its scalability and efficiency. Our main insight is that reverse-mode propagation through a rendering algorithm can be interpreted as the solution of a continuous transport problem involving the partial derivative of radiance with respect to the optimization objective. This quantity is "emitted" by sensors, "scattered" by the scene, and eventually "received" by objects with differentiable parameters. Differentiable rendering then decomposes into two separate primal and adjoint simulation steps that scale to complex scenes rendered at high resolutions. We also investigate biased variants of this algorithm and find that they considerably improve both runtime and convergence speed. We showcase an efficient GPU implementation of radiative backpropagation and compare its performance and the quality of its gradients to prior work.
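The "transcript" that the abstract identifies as the bottleneck can be illustrated with a minimal tape-based reverse-mode AD sketch (a toy, not the paper's implementation): every operation is recorded during the forward pass and replayed backwards, so memory grows with the number of recorded operations, which is the scaling problem radiative backpropagation sidesteps.

```python
# Minimal tape-based reverse-mode AD. The tape is the "transcript":
# each arithmetic operation appends a backward step, and all of them
# must be kept in memory until the reverse replay finishes.
class Var:
    def __init__(self, value, tape):
        self.value = value
        self.grad = 0.0
        self.tape = tape

    def __mul__(self, other):
        out = Var(self.value * other.value, self.tape)
        def backward():
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        self.tape.append(backward)  # record on the transcript
        return out

    def __add__(self, other):
        out = Var(self.value + other.value, self.tape)
        def backward():
            self.grad += out.grad
            other.grad += out.grad
        self.tape.append(backward)
        return out

def backprop(output):
    output.grad = 1.0
    for step in reversed(output.tape):  # replay the transcript backwards
        step()

tape = []
x = Var(2.0, tape)
y = Var(3.0, tape)
z = x * y + x  # z = x*y + x, so dz/dx = y + 1 and dz/dy = x
backprop(z)
```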