Regularity results for very degenerate elliptic equations
We present a strikingly simple proof that two rules are sufficient to automate gradient descent: 1) don’t increase the stepsize too fast and 2) don’t overstep the local curvature. No need for functional values, no line search, no information about the func ...
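The two rules match the adaptive stepsize of Malitsky and Mishchenko (2020): λ_k = min{√(1+θ_{k−1}) λ_{k−1}, ‖x_k−x_{k−1}‖ / (2‖∇f(x_k)−∇f(x_{k−1})‖)}. Below is a minimal Python sketch of that rule; the function names and the toy quadratic are illustrative, not from the snippet.

```python
import numpy as np

def adgd(grad, x0, lam0=1e-6, iters=1000):
    """Gradient descent with an adaptive stepsize built from the two rules:
    (1) the stepsize may grow only by a bounded factor per step, and
    (2) it must not exceed the local inverse-curvature estimate
        ||x_k - x_{k-1}|| / (2 ||grad(x_k) - grad(x_{k-1})||).
    Only gradients are used: no function values, no line search."""
    x_prev, g_prev = x0, grad(x0)
    x = x0 - lam0 * g_prev              # one plain step to initialize
    lam_prev, theta = lam0, np.inf
    for _ in range(iters):
        g = grad(x)
        growth = np.sqrt(1 + theta) * lam_prev      # rule 1: bounded growth
        denom = 2 * np.linalg.norm(g - g_prev)
        curv = np.linalg.norm(x - x_prev) / denom if denom > 0 else np.inf
        lam = min(growth, curv)                     # rule 2: respect curvature
        if not np.isfinite(lam):                    # degenerate first step
            lam = lam_prev
        x_prev, g_prev = x, g
        x = x - lam * g
        theta, lam_prev = lam / lam_prev, lam
    return x

# Usage on a toy quadratic: minimize 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = adgd(lambda x: A.T @ (A @ x - b), np.zeros(2))
```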
Combining diffusion strategies with complementary properties enables enhanced performance when they can be run simultaneously. In this article, we first propose two schemes for the convex combination of two diffusion strategies, namely, the power-normalize ...
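The snippet concerns networked diffusion strategies; as a single-node stand-in, the sketch below shows the generic convex-combination idea for two adaptive filters (the classical scheme of Arenas-García et al., not the article's power-normalized diffusion variant). All parameter values are illustrative.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def convex_combo_lms(x, d, mu_fast=0.1, mu_slow=0.01, mu_a=10.0, M=8):
    """Convex combination of two LMS filters (fast and slow).  The mixing
    weight lam = sigmoid(a) is itself adapted by stochastic gradient on
    the combined error, so the combination tracks whichever component
    currently performs better."""
    w1, w2, a = np.zeros(M), np.zeros(M), 0.0
    out = np.zeros(len(d))
    for n in range(M, len(d)):
        u = x[n - M + 1:n + 1][::-1]      # regressor, most recent sample first
        y1, y2 = w1 @ u, w2 @ u
        lam = sigmoid(a)
        y = lam * y1 + (1 - lam) * y2     # convex combination of outputs
        e = d[n] - y
        w1 += mu_fast * (d[n] - y1) * u   # components adapt on their own errors
        w2 += mu_slow * (d[n] - y2) * u
        a += mu_a * e * (y1 - y2) * lam * (1 - lam)   # mixing adapts on e
        out[n] = y
    return out

# Toy usage: identify an unknown FIR channel from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
w_true = 0.5 * rng.standard_normal(8)
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
y = convex_combo_lms(x, d)
```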
Recently, we have applied the generalized Littlewood theorem concerning contour integrals of the logarithm of an analytic function to find the sums over inverse powers of zeros for the incomplete gamma and Riemann zeta functions, polygamma functions, an ...
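The underlying principle, recovering sums over inverse powers of zeros from the expansion of the logarithm of the function, can be seen on a classical example (this is an illustration only, not the authors' computation):

```latex
% The zeros of f(z) = \sin z / z are z = \pm n\pi, so the product formula gives
\log\frac{\sin z}{z}
  = \sum_{n=1}^{\infty} \log\Bigl(1 - \frac{z^2}{n^2\pi^2}\Bigr)
  = -\Bigl(\sum_{n=1}^{\infty} \frac{1}{n^2\pi^2}\Bigr) z^2 + O(z^4),
% while the direct Taylor expansion is \log(\sin z / z) = -z^2/6 + O(z^4).
% Matching coefficients yields the sum over inverse squares of the zeros:
\sum_{n=1}^{\infty} \frac{1}{n^2\pi^2} = \frac{1}{6}.
```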
We develop a primal-dual convex minimization framework to solve a class of stochastic convex three-composite problems with a linear operator. We consider both the convex and the strongly convex cases and analyze the convergence of the propo ...
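For orientation, a standard primal-dual template for the three-composite problem min_x f(x) + g(x) + h(Ax) is a Condat–Vũ-style iteration with stochastic gradients of f. The sketch below is illustrative; the article's actual scheme may differ, and the toy instance is an assumption.

```python
import numpy as np

def stochastic_condat_vu(grad_f_stoch, prox_g, prox_h_conj, A,
                         x0, tau, sigma, iters=5000):
    """Primal-dual iteration for  min_x f(x) + g(x) + h(A x),
    with f smooth (accessed via stochastic gradients) and g, h
    prox-friendly.  The dual step uses the prox of the conjugate h*."""
    x, y = x0.copy(), np.zeros(A.shape[0])
    for k in range(iters):
        x_new = prox_g(x - tau * (grad_f_stoch(x, k) + A.T @ y), tau)
        y = prox_h_conj(y + sigma * A @ (2 * x_new - x), sigma)
        x = x_new
    return x

# Toy instance: f(x) = average of 0.5*(m_i @ x - b_i)^2, g = 0.1*||x||_1,
# h = ||.||_1 composed with A (so the prox of h* is clipping to [-1, 1]).
rng = np.random.default_rng(0)
M, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
A = rng.standard_normal((5, 10)) / 5
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.1 * t, 0)
g_st = lambda x, k: M[k % 50] * (M[k % 50] @ x - b[k % 50])
x = stochastic_condat_vu(g_st, soft, lambda v, s: np.clip(v, -1, 1),
                         A, np.zeros(10), tau=0.01, sigma=0.1)
```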
The problem of allocating the closed-loop poles of linear systems in specific regions of the complex plane defined by discrete time-domain requirements is addressed. The resulting non-convex set is inner-approximated by a convex region described with linea ...
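Once such a convex inner approximation is described by linear constraints on (Re z, Im z), verifying a candidate feedback gain reduces to checking each closed-loop pole against the half-planes. A minimal sketch, with illustrative half-plane data that is not taken from the article:

```python
import numpy as np

def poles_in_convex_region(A, B, K, halfplanes):
    """Check that all closed-loop poles of x+ = (A - B K) x lie in a
    convex region given as an intersection of half-planes
    a*Re(z) + b*Im(z) <= c, one (a, b, c) triple per constraint."""
    poles = np.linalg.eigvals(A - B @ K)
    return all(a * z.real + b * z.imag <= c
               for z in poles for (a, b, c) in halfplanes)

# Illustrative diamond |Re z| + |Im z| <= 0.95: a convex inner
# approximation of the unit disc (discrete-time stability region).
H = [(1, 1, 0.95), (1, -1, 0.95), (-1, 1, 0.95), (-1, -1, 0.95)]
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[2.0, 3.0]])
print(poles_in_convex_region(A, B, K, H))   # True: poles at 0.9 and 0.8
```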
It has been experimentally observed that the efficiency of distributed training with stochastic gradient descent (SGD) depends decisively on the batch size and, in asynchronous implementations, on the gradient staleness. In particular, it has been observed that the spe ...
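A toy model of the staleness effect the snippet describes: apply each gradient as if it were computed several updates ago. This simulation is an assumption for illustration, not the article's experimental setup.

```python
import numpy as np
from collections import deque

def stale_sgd(grad, x0, lr=0.02, staleness=4, iters=500):
    """Simulate asynchronous SGD where each applied gradient was computed
    at the parameters from 'staleness' updates ago.  Increasing the
    staleness (like increasing the batch size per update) changes the
    speed and stability of convergence."""
    history = deque([x0.copy()] * (staleness + 1), maxlen=staleness + 1)
    x = x0.copy()
    for _ in range(iters):
        g = grad(history[0])          # gradient at stale parameters
        x = x - lr * g
        history.append(x.copy())
    return x

# Toy quadratic with mild ill-conditioning.
A = np.diag([1.0, 5.0])
x = stale_sgd(lambda x: A @ x, np.array([1.0, 1.0]))
```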
Dispersion of the A1 Lamb mode near its onset frequency is discussed. This mode is close to the one used in XBAR devices. The typically used approximation, which is based on an elliptic form of the slowness curve and ignores mode interaction, is not valid for widely used materials ...
This work studies multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) function coupling all agents. This scenario arises in many machine learning and engineering ...
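A useful fact for this problem class: the prox of the coupling term x ↦ g(Σ_i x_i) reduces to a prox of g on the aggregate, with the correction spread evenly across agents. The sketch below uses that reduction in a centralized proximal-gradient loop; it illustrates the problem class, not the article's decentralized multi-agent algorithm.

```python
import numpy as np

def prox_sharing(prox_g, v, t):
    """Prox of h(x_1,...,x_n) = g(sum_i x_i) at the stacked point v
    (rows = agents): take the prox of g on the aggregate with scaled
    parameter t*n, then distribute the correction evenly."""
    n = v.shape[0]
    s = v.sum(axis=0)
    s_star = prox_g(s, t * n)
    return v + (s_star - s) / n

def sharing_prox_gradient(grads, prox_g, x0, t, iters=1000):
    """Proximal gradient for  min_x sum_i f_i(x_i) + g(sum_i x_i),
    with f_i smooth and g convex, possibly non-smooth."""
    x = x0.copy()
    for _ in range(iters):
        grad_step = np.stack([x[i] - t * grads[i](x[i])
                              for i in range(x.shape[0])])
        x = prox_sharing(prox_g, grad_step, t)
    return x

# Toy: 3 agents with quadratic costs and an l1 penalty on the aggregate.
targets = (np.array([1.0]), np.array([-2.0]), np.array([0.5]))
grads = [lambda xi, c=c: xi - c for c in targets]
prox_l1 = lambda s, t: np.sign(s) * np.maximum(np.abs(s) - t, 0)
x = sharing_prox_gradient(grads, prox_l1, np.zeros((3, 1)), t=0.2)
```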
We present new results concerning the approximation of the total variation, ∫_Ω |∇u|, of a function u by non-local, non-convex functionals of the form Λ_δ(u) = ∫_Ω ∫_Ω δ φ(| ...
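The snippet's integrand is truncated, so the sketch below assumes a Brezis–Nguyen-type form, δ φ(|u(x)−u(y)|/δ) / |x−y|² in one dimension with φ(t) = min(t², 1), purely for illustration. It compares the discrete total variation of a step with the nonlocal functional.

```python
import numpy as np

def tv_1d(u):
    """Discrete total variation: sum of |u_{i+1} - u_i|."""
    return np.abs(np.diff(u)).sum()

def nonlocal_functional(u, x, delta, phi=lambda t: np.minimum(t**2, 1.0)):
    """1D discretization of an ASSUMED Brezis-Nguyen-type functional
        Lambda_delta(u) = int int delta * phi(|u(x)-u(y)|/delta) / |x-y|^2;
    the snippet's exact integrand is truncated, so this form is a guess
    used only to make the object concrete."""
    h = x[1] - x[0]
    U = np.abs(u[:, None] - u[None, :])
    D = np.abs(x[:, None] - x[None, :])
    mask = D > 0
    integrand = delta * phi(U[mask] / delta) / D[mask] ** 2
    return (integrand * h * h).sum()

x = np.linspace(0, 1, 200)
u = (x > 0.5).astype(float)           # a unit step: TV = 1
print(tv_1d(u), nonlocal_functional(u, x, delta=0.05))
```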
We consider the problem of finding a saddle point for the convex-concave objective min_x max_y f(x) + ⟨Ax, y⟩ − g*(y), where f is a convex function with locally Lipschitz gradient and g is convex and possibly non-smooth. We propose an ...
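This saddle-point objective is the primal-dual form of min_x f(x) + g(Ax). The snippet cuts off before naming the proposed algorithm, so the sketch below is the standard PDHG-type template for this structure (gradient step in x, proximal step in y with extrapolation), not necessarily the authors' method.

```python
import numpy as np

def pdhg_saddle(grad_f, prox_g_conj, A, x0, tau, sigma, iters=2000):
    """Primal-dual iteration for  min_x max_y f(x) + <A x, y> - g*(y):
    a gradient step on the smooth f, then a prox step on g* at the
    extrapolated primal point 2*x_new - x."""
    x, y = x0.copy(), np.zeros(A.shape[0])
    for _ in range(iters):
        x_new = x - tau * (grad_f(x) + A.T @ y)
        y = prox_g_conj(y + sigma * A @ (2 * x_new - x), sigma)
        x = x_new
    return x

# Example: min_x 0.5*||x - b||^2 + ||A x||_1, i.e. g = ||.||_1 and
# g* the indicator of the unit infinity-norm ball (prox = clipping).
A = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
b = np.array([1.0, 0.0, -1.0])
x = pdhg_saddle(lambda x: x - b, lambda v, s: np.clip(v, -1, 1),
                A, np.zeros(3), tau=0.3, sigma=0.3)
```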