Ask any question about EPFL courses, lectures, exercises, research, news, etc., or try the example questions below.
DISCLAIMER: The Graph chatbot is not programmed to provide explicit or categorical answers to your questions. Instead, it transforms your questions into API requests that are distributed across the various IT services officially administered by EPFL. Its purpose is solely to collect and recommend relevant references to content that you can explore to help answer your questions.
The introduction of optimisation problems in which the objective function is a black box, or in which obtaining the gradient is infeasible, has recently raised interest in zeroth-order optimisation methods. As an example, finding adversarial examples for Deep Learning mode ...
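To make the zeroth-order setting concrete, here is a minimal sketch of a two-point finite-difference gradient estimator that queries only function values; the test function, the smoothing radius `mu`, and the number of sampled directions are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def zeroth_order_gradient(f, x, mu=1e-4, num_directions=20, rng=None):
    """Estimate grad f(x) from function values only (no autodiff).

    Two-point estimator: for a random unit direction u,
    (f(x + mu*u) - f(x - mu*u)) / (2*mu) approximates the directional
    derivative along u; averaging u-weighted samples and scaling by the
    dimension approximates the full gradient. Names are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    grad = np.zeros(d)
    for _ in range(num_directions):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return d * grad / num_directions  # factor d corrects for sphere sampling

# Usage: minimise a black-box quadratic with plain gradient descent.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zeroth_order_gradient(f, x)
print(x)  # approaches the minimiser [1, 1, 1, 1, 1]
```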
We study the effect of stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods, while the batch version converges. We address this issue with ...
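The failure mode described above can be reproduced on a toy problem. The sketch below runs extragradient on the bilinear game min_x max_y xy: with exact ("batch") gradients the iterates contract to the equilibrium, while additive gradient noise, a stand-in for mini-batch sampling, keeps them from settling there. This is an illustration under assumed parameters, not the construction used in the paper.

```python
import numpy as np

def extragradient(noise_std, steps=2000, eta=0.1, seed=0):
    """Extragradient on the bilinear game min_x max_y x*y.

    With exact gradients the method contracts to the equilibrium
    (0, 0); with additive gradient noise (mimicking mini-batch
    sampling) the iterates stop converging and hover around the
    equilibrium instead. Toy illustration only.
    """
    rng = np.random.default_rng(seed)
    x, y = 1.0, 1.0
    for _ in range(steps):
        nx, ny = noise_std * rng.standard_normal(2)
        # extrapolation step
        xh, yh = x - eta * (y + nx), y + eta * (x + ny)
        nx, ny = noise_std * rng.standard_normal(2)
        # update step, using gradients at the extrapolated point
        x, y = x - eta * (yh + nx), y + eta * (xh + ny)
    return np.hypot(x, y)  # distance to the equilibrium (0, 0)

print(extragradient(noise_std=0.0))  # batch: ~0 (converges)
print(extragradient(noise_std=0.5))  # stochastic: stays bounded away from 0
```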
Mini-batch stochastic gradient descent (SGD) is state of the art in large scale distributed training. The scheme can reach a linear speedup with respect to the number of workers, but this is rarely seen in practice as the scheme often suffers from large ne ...
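For reference, a minimal sketch of the synchronous data-parallel scheme the abstract refers to: each worker computes a gradient on its own mini-batch, and the averaged gradient drives a single update. The model, loss, and all sizes below are illustrative assumptions.

```python
import numpy as np

def parallel_sgd_step(w, X, y, num_workers=8, batch_size=32, lr=0.1, rng=None):
    """One synchronous mini-batch SGD step for least squares.

    Each worker computes a gradient on its own mini-batch; the
    averaged gradient drives a single update. This is the idealised
    setting in which a linear speedup in the number of workers is
    possible. Model, loss, and sizes are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    grads = []
    for _ in range(num_workers):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        grads.append(2 * Xb.T @ (Xb @ w - yb) / batch_size)  # grad of mean squared error
    return w - lr * np.mean(grads, axis=0)  # averaging == one large mini-batch

# Usage on a synthetic regression problem.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 10))
w_true = rng.standard_normal(10)
y = X @ w_true
w = np.zeros(10)
for _ in range(100):
    w = parallel_sgd_step(w, X, y, rng=rng)
print(np.linalg.norm(w - w_true))  # small: the averaged updates converge
```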
We give an answer to a question posed in Amorim et al. (ESAIM Math Model Numer Anal 49(1):19–37, 2015), which can, loosely speaking, be formulated as follows: consider a family of continuity equations where the velocity depends on the solution via the convo ...
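A generic instance of this family of equations, for orientation only (the convolution kernel K and the form of the velocity map V are assumptions; the exact setup in Amorim et al. may differ):

```latex
% A continuity equation whose velocity depends on the solution through
% a convolution (illustrative form; the paper's exact setup may differ):
\partial_t \rho_t(x) + \operatorname{div}\!\big( \rho_t(x)\, V\big((K * \rho_t)(x)\big) \big) = 0,
\qquad (K * \rho_t)(x) = \int K(x - y)\, \rho_t(y)\, \mathrm{d}y .
```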
We prove exponential convergence to equilibrium for the Fredrickson-Andersen one-spin facilitated model on bounded degree graphs satisfying a subexponential, but larger than polynomial, growth condition. This was a classical conjecture related to non-attra ...
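As a reminder of what the model is, here is a minimal Monte Carlo sketch of FA-1f dynamics on an arbitrary bounded-degree graph, using one common convention (sites are 0/1 and a site may be resampled only if at least one neighbour is 0); the parameters and the choice of convention are illustrative.

```python
import numpy as np

def fa1f_sweep(config, graph, q, rng):
    """One Monte Carlo sweep of the Fredrickson-Andersen one-spin
    facilitated (FA-1f) model on an arbitrary graph.

    Convention used here (one of several in the literature): sites are
    0 (empty, facilitating) or 1 (occupied); a site may be resampled
    from its equilibrium distribution (0 with probability q) only if
    at least one neighbour is currently 0.
    """
    for site in rng.permutation(len(config)):
        if any(config[nb] == 0 for nb in graph[site]):  # kinetic constraint
            config[site] = 0 if rng.random() < q else 1
    return config

# Usage: FA-1f on a cycle of length 20 (bounded degree 2).
rng = np.random.default_rng(0)
n, q = 20, 0.3
graph = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
config = np.ones(n, dtype=int)
config[0] = 0  # seed one vacancy so the dynamics is not frozen
for _ in range(1000):
    config = fa1f_sweep(config, graph, q, rng)
print(config)
```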
The Minnesota family of exchange-correlation (xc) functionals is among the most popular, accurate, and widely used functionals available to date. However, their use in plane-wave based first-principles MD has been limited by their sparse availability. ...
Several useful variance-reduced stochastic gradient algorithms, such as SVRG, SAGA, Finito, and SAG, have been proposed to minimize empirical risks with linear convergence properties to the exact minimizers. The existing convergence results assume uniform ...
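For orientation, a minimal SVRG sketch with the uniform sampling that the quoted convergence results assume; the loss, step size, and epoch structure below are illustrative choices, not the paper's.

```python
import numpy as np

def svrg(grad_i, w0, n, epochs=20, inner=200, lr=0.005, rng=None):
    """Minimal SVRG sketch for an empirical risk (1/n) * sum_i f_i(w).

    grad_i(w, i) returns the gradient of the i-th component. Each epoch
    computes the full gradient at a snapshot point; inner steps then use
    the variance-reduced direction
        g_i(w) - g_i(w_snap) + full_grad(w_snap),
    whose variance vanishes at the minimiser, giving linear convergence
    to the exact solution. Uniform sampling is used here; the paper
    cited studies relaxations of that assumption.
    """
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()
        full = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            w -= lr * (grad_i(w, i) - grad_i(w_snap, i) + full)
    return w

# Usage: least squares with f_i(w) = (x_i . w - y_i)^2.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = X @ w_true
grad_i = lambda w, i: 2 * X[i] * (X[i] @ w - y[i])
print(np.linalg.norm(svrg(grad_i, np.zeros(5), n=200) - w_true))  # ~0
```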
We propose a data-driven artificial viscosity model for shock capturing in discontinuous Galerkin methods. The proposed model trains a multi-layer feedforward network to map from the element-wise solution to a smoothness indicator, based on which the artif ...
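A sketch of the kind of network involved, assuming a plain MLP with ReLU hidden layers and a sigmoid output; the sizes, activations, and input normalisation are guesses for illustration, and the actual training data and indicator definition are those of the paper.

```python
import numpy as np

def mlp_smoothness_indicator(u_element, weights):
    """Forward pass of a small feedforward network mapping the nodal
    values of one DG element to a scalar smoothness indicator in (0, 1).

    The architecture here is an illustrative guess, not the one from
    the paper; in the data-driven approach the network is trained on
    labelled smooth/non-smooth solutions, and the resulting indicator
    scales an artificial viscosity in troubled elements.
    """
    # Normalise the input so the indicator is scale-invariant.
    h = (u_element - u_element.mean()) / (np.abs(u_element).max() + 1e-12)
    for W, b in weights[:-1]:
        h = np.maximum(W @ h + b, 0.0)                # hidden layers: ReLU
    W, b = weights[-1]
    return 1.0 / (1.0 + np.exp(-(W @ h + b)[0]))      # sigmoid output

# Random (untrained) weights, just to exercise the forward pass:
rng = np.random.default_rng(0)
sizes = [8, 16, 16, 1]  # 8 nodal values -> scalar indicator
weights = [(rng.standard_normal((m, n)) * 0.3, np.zeros(m))
           for n, m in zip(sizes[:-1], sizes[1:])]
print(mlp_smoothness_indicator(rng.standard_normal(8), weights))
```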
We present new results concerning the approximation of the total variation, $\int_\Omega |\nabla u|$, of a function u by non-local, non-convex functionals of the form $$ \Lambda_\delta u = \int_{\Omega} \int_{\Omega} \frac{\delta \varphi \big( |u(x) - ...
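The displayed functional is cut off in this snippet; for orientation, non-local functionals of this type in the related literature take the form below, quoted from that literature as an assumption rather than recovered from the snippet (d is the space dimension and φ a suitable non-convex profile):

```latex
% Assumed general form of the non-local functional (from the related
% literature; the snippet's own display is truncated):
\Lambda_\delta u = \int_{\Omega} \int_{\Omega}
  \frac{\delta\, \varphi\big( |u(x) - u(y)| / \delta \big)}{|x - y|^{d+1}}
  \, \mathrm{d}x \, \mathrm{d}y
```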
In this article, we address the numerical solution of the Dirichlet problem for the three-dimensional elliptic Monge-Ampère equation using a least-squares/relaxation approach. The relaxation algorithm allows the decoupling of the differential operators fro ...
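For reference, the boundary-value problem in question is the Dirichlet problem for the Monge-Ampère equation; in the standard elliptic setting it reads as follows (f > 0 and the boundary datum g are given):

```latex
% Dirichlet problem for the elliptic Monge-Ampère equation (f > 0):
\det D^2 \psi = f \quad \text{in } \Omega \subset \mathbb{R}^3,
\qquad \psi = g \quad \text{on } \partial\Omega .
```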