The introduction of optimisation problems in which the objective function is a black box, or in which obtaining the gradient is infeasible, has recently raised interest in zeroth-order optimisation methods. As an example, finding adversarial examples for Deep Learning mode ...
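The abstract is cut off before the method is described; purely as a generic illustration of the zeroth-order setting (not this paper's algorithm), the sketch below uses a two-point finite-difference gradient estimator, one standard building block when only function values are available. The objective, smoothing radius mu, and step size are arbitrary choices.

import numpy as np

def two_point_grad_estimate(f, x, mu=1e-4, rng=np.random.default_rng(0)):
    # Probe f at two points along a random unit direction.
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    # The finite difference approximates the directional derivative;
    # scaling by the dimension makes the estimate (approximately) unbiased.
    return x.size * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Toy usage: zeroth-order gradient descent on a quadratic.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(500):
    x -= 0.05 * two_point_grad_estimate(f, x)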
We present new results concerning the approximation of the total variation, $\int_\Omega |\nabla u|$, of a function u by non-local, non-convex functionals of the form $$ \Lambda_\delta u = \int_{\Omega} \int_{\Omega} \frac{\delta \varphi \big( |u(x) - ...
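The displayed functional is truncated. For orientation only, non-local approximations of the total variation in the literature typically take a form such as $$ \Lambda_\delta u = \int_{\Omega}\int_{\Omega} \frac{\delta\,\varphi\!\left(\frac{|u(x)-u(y)|}{\delta}\right)}{|x-y|^{d+1}}\,dx\,dy, $$ where d is the dimension of $\Omega$ and $\varphi$ is a non-convex profile; the kernel exponent and normalisation shown here are the common convention, not necessarily this paper's definition.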
In this article, we address the numerical solution of the Dirichlet problem for the three-dimensional elliptic Monge-Ampère equation using a least-squares/relaxation approach. The relaxation algorithm allows the decoupling of the differential operators fro ...
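For reference, the Dirichlet problem for the elliptic Monge-Ampère equation is usually stated as $$ \det D^2 u = f \ \text{in } \Omega \subset \mathbb{R}^3, \qquad u = g \ \text{on } \partial\Omega, \qquad f > 0, $$ with generic data f and boundary values g (the symbols are not taken from the paper's notation).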
We prove exponential convergence to equilibrium for the Fredrickson-Andersen one-spin facilitated model on bounded degree graphs satisfying a subexponential, but larger than polynomial, growth condition. This was a classical conjecture related to non-attra ...
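As background, exponential convergence to equilibrium for such interacting particle systems is typically quantified in a form like $$ \| P_t f - \pi(f) \|_{L^2(\pi)} \le C\, e^{-ct}\, \| f \|_{L^2(\pi)}, \qquad t \ge 0, $$ for the Markov semigroup $P_t$ and stationary measure $\pi$; the norm and constants here are illustrative and not the paper's precise statement.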
The Minnesota family of exchange-correlation (xc) functionals is among the most popular, accurate, and widely used functionals available to date. However, their use in plane-wave-based first-principles MD has been limited by their sparse availability. ...
Mini-batch stochastic gradient descent (SGD) is the state of the art in large-scale distributed training. The scheme can reach a linear speedup with respect to the number of workers, but this is rarely seen in practice as the scheme often suffers from large ne ...
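To make the speedup argument concrete, here is a minimal single-process simulation of synchronous mini-batch SGD, assuming a least-squares objective; the data, number of workers, batch size, and learning rate are illustrative only. Averaging K worker gradients is equivalent to one gradient on a K-times larger batch, which is where the ideal linear speedup comes from.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(1024)

def grad(w, idx):
    # Least-squares mini-batch gradient on the rows in idx.
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

w = np.zeros(10)
workers, local_batch, lr = 4, 32, 0.1
for step in range(200):
    # Each "worker" computes a gradient on its own mini-batch,
    # and the averaged gradient is applied once per round.
    g = np.mean([grad(w, rng.integers(0, 1024, local_batch)) for _ in range(workers)], axis=0)
    w -= lr * g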
Several useful variance-reduced stochastic gradient algorithms, such as SVRG, SAGA, Finito, and SAG, have been proposed to minimize empirical risks with linear convergence properties to the exact minimizers. The existing convergence results assume uniform ...
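Since the abstract names SVRG among the variance-reduced methods, here is a minimal SVRG sketch on a least-squares problem, with uniform sampling as in the classical analyses; the data, epoch length, and step size are illustrative, and this is not meant to reproduce the paper's modified sampling scheme.

import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 10
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)

def grad_i(w, i):
    # Gradient of the i-th squared-error term.
    return (X[i] @ w - y[i]) * X[i]

w = np.zeros(d)
lr, epochs = 0.02, 20
for _ in range(epochs):
    w_snap = w.copy()
    full_grad = X.T @ (X @ w_snap - y) / n      # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced step: the stochastic gradient is corrected by the
        # snapshot gradient, so its variance vanishes at the minimizer.
        w -= lr * (grad_i(w, i) - grad_i(w_snap, i) + full_grad)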
We give an answer to a question posed in Amorim et al. (ESAIM Math Model Numer Anal 49(1):19–37, 2015), which can, loosely speaking, be formulated as follows: consider a family of continuity equations where the velocity depends on the solution via the convo ...
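Schematically, such non-local continuity equations take a form like $$ \partial_t u + \operatorname{div}\big( u\, V(u * \eta) \big) = 0, \qquad u(0,\cdot) = u_0, $$ where the velocity field V is evaluated on the convolution of the solution with a kernel $\eta$; the exact structure considered in the cited paper may differ.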
We study the effect of the stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods, while the batch version converges. We address this issue with ...
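The paper's remedy is cut off in this excerpt; purely to illustrate the setting (and not the authors' fix), the sketch below runs stochastic extragradient, a standard game optimization method, on a bilinear min-max game, simulating mini-batch gradient noise with Gaussian perturbations. The game, noise level, and step size are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

def game_grads(x, y, noise=0.0):
    # Gradients of the bilinear game f(x, y) = x^T A y, optionally perturbed
    # to mimic stochastic (mini-batch) gradient estimates.
    gx = A @ y + noise * rng.standard_normal(5)
    gy = A.T @ x + noise * rng.standard_normal(5)
    return gx, gy

x, y, lr = np.ones(5), np.ones(5), 0.05
for _ in range(1000):
    # Extragradient: take a half step to a look-ahead point, then update
    # using the gradients evaluated there.
    gx, gy = game_grads(x, y, noise=0.5)
    x_half, y_half = x - lr * gx, y + lr * gy
    gx, gy = game_grads(x_half, y_half, noise=0.5)
    x, y = x - lr * gx, y + lr * gy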
We propose a data-driven artificial viscosity model for shock capturing in discontinuous Galerkin methods. The proposed model trains a multi-layer feedforward network to map from the element-wise solution to a smoothness indicator, based on which the artif ...
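As a rough illustration of the inference step only, the sketch below maps a per-element vector of nodal solution values through a small untrained feedforward network to a scalar smoothness indicator in (0, 1), which would then scale the artificial viscosity. The number of nodes per element, layer sizes, activation, and viscosity scaling are all assumptions; the training procedure described in the paper is not shown.

import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    # Random weights and biases for a small feedforward network.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def smoothness_indicator(u_elem, params):
    # Forward pass: tanh hidden layers, sigmoid output in (0, 1).
    h = u_elem
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))

params = init_mlp([8, 16, 16, 1])        # 8 nodal values per element (assumed)
s = smoothness_indicator(rng.standard_normal(8), params)
viscosity = 1e-2 * s                     # illustrative viscosity scaling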