
# Reparameterizing Discontinuous Integrands for Differentiable Rendering

Abstract

Differentiable rendering has recently opened the door to a number of challenging inverse problems involving photorealistic images, such as computational material design and scattering-aware reconstruction of geometry and materials from photographs. Differentiable rendering algorithms strive to estimate partial derivatives of pixels in a rendered image with respect to scene parameters, which is difficult because visibility changes are inherently non-differentiable. We propose a new technique for differentiating path-traced images with respect to scene parameters that affect visibility, including the position of cameras, light sources, and vertices in triangle meshes. Our algorithm computes the gradients of illumination integrals by applying changes of variables that remove or strongly reduce the dependence of the position of discontinuities on differentiable scene parameters. The underlying parameterization is created on the fly for each integral and enables accurate gradient estimates using standard Monte Carlo sampling in conjunction with automatic differentiation. Importantly, our approach does not rely on sampling silhouette edges, which has been a bottleneck in previous work and tends to produce high-variance gradients when important edges are found with insufficient probability in scenes with complex visibility and high-resolution geometry. We show that our method only requires a few samples to produce gradients with low bias and variance for challenging cases such as glossy reflections and shadows. Finally, we use our differentiable path tracer to reconstruct the 3D geometry and materials of several real-world objects from a set of reference photographs.
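The abstract's core idea, applying a change of variables so that the position of a discontinuity no longer depends on the differentiated parameter, can be illustrated on a one-dimensional toy integral. This is a hand-made sketch, not the paper's estimator: the integrand, the substitution x = θ·u, and the sample count are all chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.5
N = 100_000

# Toy discontinuous integral: I(theta) = integral of x^2 * 1{x < theta} over [0,1]
# = theta^3 / 3, so the true derivative is dI/dtheta = theta^2 = 0.25.

# Naive estimator: sample x uniformly and average x^2 * 1{x < theta}.
# At fixed samples this estimator is piecewise constant in theta, so its
# pathwise derivative is zero almost everywhere: the moving discontinuity
# carries all of the gradient and the naive estimator misses it entirely.
naive_grad = 0.0

# Change of variables x = theta*u moves the discontinuity to u = 1, which no
# longer depends on theta: I(theta) = integral over [0,1] of (theta*u)^2 * theta du.
# The reparameterized integrand theta^3 * u^2 is smooth in theta, so ordinary
# pathwise differentiation applies: d/dtheta [theta^3 u^2] = 3 theta^2 u^2.
u = rng.uniform(0.0, 1.0, N)
reparam_grad = np.mean(3.0 * theta**2 * u**2)

print(naive_grad, reparam_grad)  # reparam_grad is close to the true value 0.25
```

In the paper this substitution is built on the fly per integral and combined with automatic differentiation; the toy only shows why removing the parameter dependence of the discontinuity makes standard Monte Carlo gradients work.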

Official source



Related concepts (17)

Monte Carlo method

A Monte Carlo method is an algorithmic method that computes an approximate numerical value using random processes, that is, techniques …
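As a minimal illustration of this entry, the sketch below estimates a one-dimensional integral by averaging random samples; the integrand and sample count are arbitrary choices made for the example.

```python
import numpy as np

# Monte Carlo estimate of the integral of exp(-x^2) over [0, 1], whose value
# is approximately 0.74682: draw uniform samples and average the integrand.
rng = np.random.default_rng(1)
N = 100_000
x = rng.uniform(0.0, 1.0, N)
values = np.exp(-x**2)
estimate = values.mean()

# The standard error shrinks as 1/sqrt(N) regardless of dimension, the
# property that makes Monte Carlo attractive for high-dimensional rendering
# integrals.
std_error = values.std() / np.sqrt(N)
print(estimate, std_error)
```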

Variance (mathematics)

Figure caption: example of samples from two populations having the same mean but different variances. The population in red has a mean of 100 and a variance of 100 (standard deviation = SD = …
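The caption's setup, two populations with the same mean but different variances, can be reproduced numerically; the sample sizes and the second standard deviation below are assumptions made for the sketch.

```python
import numpy as np

# Two samples with (approximately) the same mean but very different variances,
# mirroring the caption: mean 100, standard deviations 10 and 50.
rng = np.random.default_rng(2)
a = rng.normal(100.0, 10.0, 50_000)
b = rng.normal(100.0, 50.0, 50_000)

print(a.mean(), b.mean())  # both close to 100
print(a.var(), b.var())    # close to 100 vs close to 2500
```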

Geometry

Geometry is originally the branch of mathematics that studies figures in the plane and in space (Euclidean geometry). Since the end of the …, geometry has also studied figures belonging …

Related publications (3)


Wenzel Alban Jakob, Merlin Eléazar Nimier-David, Sébastien Nicolas Speierer

Physically based differentiable rendering has recently evolved into a powerful tool for solving inverse problems involving light. Methods in this area perform a differentiable simulation of the physical process of light transport and scattering to estimate partial derivatives relating scene parameters to pixels in the rendered image. Together with gradient-based optimization, such algorithms have interesting applications in diverse disciplines, e.g., to improve the reconstruction of 3D scenes, while accounting for interreflection and transparency, or to design meta-materials with specified optical properties. The most versatile differentiable rendering algorithms rely on reverse-mode differentiation to compute all requested derivatives at once, enabling optimization of scene descriptions with millions of free parameters. However, a severe limitation of the reverse-mode approach is that it requires a detailed transcript of the computation that is subsequently replayed to back-propagate derivatives to the scene parameters. The transcript of typical renderings is extremely large, exceeding the available system memory by many orders of magnitude, hence current methods are limited to simple scenes rendered at low resolutions and sample counts. We introduce radiative backpropagation, a fundamentally different approach to differentiable rendering that does not require a transcript, greatly improving its scalability and efficiency. Our main insight is that reverse-mode propagation through a rendering algorithm can be interpreted as the solution of a continuous transport problem involving the partial derivative of radiance with respect to the optimization objective. This quantity is "emitted" by sensors, "scattered" by the scene, and eventually "received" by objects with differentiable parameters. Differentiable rendering then decomposes into two separate primal and adjoint simulation steps that scale to complex scenes rendered at high resolutions. 
We also investigate biased variants of this algorithm and find that they considerably improve both runtime and convergence speed. We showcase an efficient GPU implementation of radiative backpropagation and compare its performance and the quality of its gradients to prior work.
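The two-pass structure described above can be sketched on a drastically simplified, deterministic "scene": a single light path that bounces off a chain of surfaces with scalar albedos. Everything here (the chain model, the `incident_radiance` helper, the parameter values) is invented for illustration; the point is only the structure: the adjoint quantity is emitted by the sensor, attenuated like ordinary radiance, and the primal radiance it meets is recomputed on demand rather than replayed from a stored transcript.

```python
import numpy as np

def incident_radiance(rho, E, i):
    # Primal sub-simulation: radiance reaching surface i from the emitter.
    # Crucially, this is recomputed on demand instead of being read back from
    # a transcript of the primal pass, mirroring the paper's memory argument.
    return E * np.prod(rho[i + 1:])

def render(rho, E):
    # Primal pass: one light path bouncing off every surface in turn.
    return E * np.prod(rho)

def radiative_backprop(rho, E, delta):
    # Adjoint pass: "emit" the loss derivative delta from the sensor and let
    # it scatter along the path like ordinary radiance; each surface with a
    # differentiable albedo receives (adjoint radiance) * (incident radiance).
    grad = np.zeros_like(rho)
    A = delta                    # adjoint radiance leaving the sensor
    for i in range(len(rho)):
        grad[i] = A * incident_radiance(rho, E, i)
        A *= rho[i]              # attenuated exactly like ordinary radiance
    return grad

rho = np.array([0.8, 0.5, 0.9])  # assumed albedos, chosen for the example
E = 2.0                          # assumed emitter radiance
grad = radiative_backprop(rho, E, delta=1.0)
print(render(rho, E), grad)
```

A finite-difference check confirms that each `grad[i]` matches the derivative of `render` with respect to `rho[i]`, which is the product rule applied to E·Πρ_j.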

Photorealistic images created using physical simulations of light have become a ubiquitous element of our everyday lives. The most successful techniques for producing such images replicate the key physical phenomena in a detailed software simulation, including the emission of light by sources, transport through space, and scattering in the atmosphere and at the surfaces of objects. Mathematically, this computation involves the approximation of many high-dimensional integrals, one for each pixel of the image, usually using Monte Carlo methods. Although a great deal of progress has been made on rendering algorithms, so that physically based rendering is now routinely used in many applications, commonly occurring situations can still cause these algorithms to become impractically slow, forcing users to make unrealistic scene modifications to obtain satisfactory results. Light transport is complex because light can flow along a great variety of different paths through a scene, though only a subset of these makes a relevant contribution to the final image. The simulation becomes ineffective when it is difficult to find the important paths. Commonly occurring materials like smooth metal or glass surfaces can easily lead to such situations, where only very few lighting paths participate, leading to spiky integrands and poor convergence. How to efficiently handle such cases in general has been a long-standing problem. In this paper, we provide a geometric solution to this problem by representing light paths as points in an abstract high-dimensional configuration space that is defined by a system of constraint equations. This configuration space is a differentiable manifold, which can be locally parameterized in the neighborhood of an existing path. Building on this framework, we propose Manifold Exploration, a rendering technique that efficiently explores the integration domain by taking geometrically informed steps on the manifold of light paths.
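The "geometrically informed step on a constraint manifold" can be sketched in miniature: move along a local tangent parameterization, then use a Newton-style projection to return to the set where the constraint vanishes. The unit circle below is a stand-in for the far higher-dimensional path manifold, and the constraint and projection are invented for illustration, not the specular constraint equations of the actual method.

```python
import numpy as np

def constraint(p):
    # Toy constraint manifold: the unit circle, c(p) = |p|^2 - 1 = 0.
    return p @ p - 1.0

def project(p, tol=1e-12):
    # Gauss-Newton iteration pulling a point back onto the constraint set.
    for _ in range(50):
        c = constraint(p)
        if abs(c) < tol:
            break
        g = 2.0 * p                 # gradient of the constraint at p
        p = p - c * g / (g @ g)     # step that drives c toward zero
    return p

def manifold_step(p, step):
    # Move in the local tangent parameterization, then re-project: the same
    # predict-then-correct structure used to walk along the path manifold.
    t = np.array([-p[1], p[0]])     # tangent of the circle at p
    return project(p + step * t)

p = manifold_step(np.array([1.0, 0.0]), 0.3)
print(p)  # a new point, still on the unit circle
```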

In the past few years, fusing NIR and color images has been explored in general computational photography and computer vision tasks, where traditionally only color images are used. The additional information provided by the differences of light and scene reflections in the visible and NIR bands of the electromagnetic spectrum is used in several applications such as image denoising, image dehazing, shadow detection and removal, and high dynamic-range imaging. In this thesis, we study a system that simultaneously captures color and NIR images on a single silicon sensor. Such a camera could be manufactured with minor changes in the hardware of consumer color cameras and it could be integrated inside small devices such as cell phones. We address two main challenges in color and NIR acquisition. First, we study the spatial and spectral sampling of the scene, which is inevitable in single-sensor acquisition of multiple spectral channels. We then focus on chromatic aberration distortions. Similarly to color imaging, we use a color filter array (CFA) to sample the scene in visible and NIR bands. We address two main challenges regarding the CFA: (1) designing the CFA, and (2) developing a demosaicing algorithm to reconstruct full-resolution images from the subsampled measurements. We consider a general CFA with filters that transmit different mixtures of color and NIR channels. We develop a framework that, by exploiting the spatial and spectral correlations of color and NIR images, computes the transmittance of each filter and the demosaicing matrix. Our optimized CFA and demosaicing outperform other solutions developed for single-sensor color and NIR acquisition. We also investigate a CFA that is formed by one blue, one green, one red, and one NIR-pass filter. We call it the RGBN CFA and assume that, similarly to color cameras, it uses dye filters that do not have sharp cut-offs. Hence, color and NIR radiation leaks into the NIR and color filters, respectively.
We devise an algorithm that reconstructs full-resolution images from mixed and subsampled sensor measurements. The RGBN CFA and our reconstruction algorithm perform as well as or even better than other single-sensor acquisition techniques that use more complicated hardware components. The problem of chromatic aberration is caused by deficiencies of optical elements. A simple lens converges light rays with different wavelengths at different distances from the lens. Hence, if the color image is in focus and sharp on the sensor plane, the NIR image captured with the same focus settings is out of focus and blurred. We propose an algorithm that retrieves the lost details in NIR using the gradients of the sharp color image. As the high-frequency details of color and NIR images are not strongly correlated in all image patches, our method locally adapts the contribution of color gradients in deblurring. To achieve this, we develop a multiscale scheme that iterates between deblurring NIR and estimating the correlation between color and NIR high-frequency components. Our algorithm outperforms both blind and guided deblurring approaches. We also design a method that estimates a dense blur-kernel map when the severity of chromatic aberration changes as the depths of objects vary across the image. Our method performs better than the competing methods both in estimating the blur-kernel map and in deblurring.
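The demosaicing problem described in this thesis, reconstructing a full-resolution channel from subsampled CFA measurements, can be illustrated with the classical normalized-averaging baseline that such optimized methods improve upon. The mask layout (one measured pixel per 2x2 block) and the 3x3 neighborhood are assumptions for the sketch; the thesis instead learns the filter transmittances and a demosaicing matrix jointly.

```python
import numpy as np

def demosaic_channel(samples, mask):
    # samples: measured values where mask == 1, zeros elsewhere.
    # Normalized box filtering: each missing pixel becomes the average of the
    # measured values in its 3x3 neighborhood (wrap-around borders via roll).
    num = np.zeros_like(samples, dtype=float)
    den = np.zeros_like(samples, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            num += np.roll(np.roll(samples * mask, dy, axis=0), dx, axis=1)
            den += np.roll(np.roll(mask.astype(float), dy, axis=0), dx, axis=1)
    out = num / np.maximum(den, 1e-9)
    out[mask.astype(bool)] = samples[mask.astype(bool)]  # keep measurements
    return out

# Demo: a constant scene measured only where the (assumed) filter passes.
H, W = 8, 8
mask = np.zeros((H, W), dtype=int)
mask[::2, ::2] = 1                  # one measurement per 2x2 block
samples = 3.0 * mask
out = demosaic_channel(samples, mask)
print(out)  # the constant value is recovered at every pixel
```

A constant scene is the easiest possible test; real demosaicers are judged on edges and texture, where this baseline blurs and the optimized CFA/demosaicing pair of the thesis performs markedly better.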