Universal and adaptive methods for robust stochastic optimization
Variance-reduced gradient estimators for policy gradient methods have been one of the main focuses of research in reinforcement learning in recent years, as they accelerate the estimation process. We propose a variance-reduced policy-gradient m ...
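The simplest instance of this family of techniques is the control-variate (baseline) trick: subtracting a baseline from the sampled return leaves the score-function estimator unbiased but can shrink its variance considerably. The sketch below illustrates that generic device on a made-up one-parameter Gaussian bandit; it is not the estimator proposed in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_grads(theta, n, baseline):
    """Score-function (REINFORCE) gradient samples for a 1-D Gaussian
    policy a ~ N(theta, 1) on a toy bandit with reward r(a) = -(a - 3)^2."""
    a = rng.normal(theta, 1.0, size=n)
    r = -(a - 3.0) ** 2
    glogp = a - theta                  # d/dtheta of log N(a; theta, 1)
    return glogp * (r - baseline)      # baseline leaves the mean unchanged

theta = 0.0
plain = sample_grads(theta, 100_000, baseline=0.0)
# The mean reward is a classic unbiased baseline choice.
b = np.mean(-(rng.normal(theta, 1.0, 100_000) - 3.0) ** 2)
reduced = sample_grads(theta, 100_000, baseline=b)

print(f"grad mean: {plain.mean():+.2f} vs {reduced.mean():+.2f}")  # both ~ +6
print(f"grad var : {plain.var():.0f} vs {reduced.var():.0f}")      # ~222 vs ~82
```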
Organocatalysis has evolved significantly over the last decades, becoming a pillar of synthetic chemistry, but traditional theoretical approaches based on quantum mechanical computations to investigate reaction mechanisms and provide rationalizations of ca ...
Recently there has been a surge of interest in understanding implicit regularization properties of iterative gradient-based optimization algorithms. In this paper, we study the statistical guarantees on the excess risk achieved by early-stopped unconstrain ...
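Since the abstract is truncated, the following is only a generic illustration of the phenomenon, not the paper's setting: full-batch gradient descent on overparameterized least squares, stopped at different times, trades data fit against the norm of the iterate, much like an explicit l2 penalty.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 200                            # fewer samples than parameters
X = rng.normal(size=(n, d))
w_true = np.zeros(d); w_true[:5] = 1.0    # sparse ground truth
y = X @ w_true + 0.5 * rng.normal(size=n)

w = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, 2) ** 2      # step size from the spectral norm
for t in range(1, 2001):
    w -= lr * X.T @ (X @ w - y)           # full-batch gradient step
    if t in (10, 100, 2000):
        # Early iterates stay small in norm (strong implicit regularization);
        # late iterates interpolate the noisy labels.
        print(t, round(np.linalg.norm(w), 2), round(np.linalg.norm(w - w_true), 2))
```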
Many robotics problems are formulated as optimization problems. However, most optimization solvers used in robotics are local, and their performance depends heavily on the initial guess. For challenging problems, the solver will often get stuck at poor loc ...
EPFL, 2022
Reaction optimization is challenging and traditionally delegated to domain experts who iteratively propose increasingly optimal experiments. Problematically, the reaction landscape is complex and often requires hundreds of experiments to reach convergence ...
Bern, 2023
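For orientation, a Bayesian-optimization loop of the kind typically used for reaction optimization fits a surrogate to the experiments run so far and picks the next one by an acquisition rule. In the sketch below the Gaussian-process surrogate, the upper-confidence-bound rule, and the toy yield_fn landscape are all illustrative assumptions, not this paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

def yield_fn(x):                       # hypothetical yield landscape
    return np.exp(-(x - 0.7) ** 2 / 0.02) + 0.1 * np.sin(15 * x)

def rbf(a, b, ls=0.1):                 # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

grid = np.linspace(0, 1, 200)          # candidate experimental conditions
X = list(rng.uniform(0, 1, 3))         # three random initial experiments
Y = [yield_fn(x) for x in X]

for _ in range(10):
    Xa, Ya = np.array(X), np.array(Y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, Ya)                  # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    # Upper-confidence-bound acquisition: explore where variance is high.
    x_next = grid[np.argmax(mu + 2.0 * np.sqrt(np.maximum(var, 0)))]
    X.append(x_next); Y.append(yield_fn(x_next))

print(f"best condition: {X[np.argmax(Y)]:.3f}, best yield: {max(Y):.3f}")
```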
One of the main goals of Artificial Intelligence is to develop models capable of providing valuable predictions in real-world environments. In particular, Machine Learning (ML) seeks to design such models by learning from examples coming from this same envi ...
EPFL, 2022
This work proposes a universal and adaptive second-order method for minimizing second-order smooth, convex functions. Our algorithm achieves O(σ/√T) convergence when the oracle feedback is stochastic with variance σ², and improves its convergence to O(1/ ...
2022
We propose an adaptive variance-reduction method, called AdaSpider, for minimization of L-smooth, non-convex functions with a finite-sum structure. In essence, AdaSpider combines an AdaGrad-inspired [Duchi et al., 2011, McMahan & Streeter, 2010], but a fai ...
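As a rough sketch of how the two named ingredients fit together (not a faithful reproduction of AdaSpider), one can drive a SPIDER-style recursive variance-reduced estimator [Fang et al., 2018] with an AdaGrad-norm step size; the restart period, constants, and toy finite-sum least-squares objective below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 100, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):                      # gradient of one component f_i
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(d)
x_prev, v, acc = x.copy(), np.zeros(d), 0.0
for t in range(200):
    if t % n == 0:                     # periodic full-gradient restart
        v = A.T @ (A @ x - b) / n
    else:                              # SPIDER recursive estimator
        i = rng.integers(n)
        v = v + grad_i(x, i) - grad_i(x_prev, i)
    acc += np.dot(v, v)                # AdaGrad-norm accumulator
    step = 0.5 / np.sqrt(acc + 1e-12)  # step shrinks with observed gradients
    x_prev = x.copy()
    x = x - step * v

print("final grad norm:", np.linalg.norm(A.T @ (A @ x - b) / n))
```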
In this paper, we analyze the recently proposed stochastic primal-dual hybrid gradient (SPDHG) algorithm and provide new theoretical results. In particular, we prove almost sure convergence of the iterates to a solution with convexity and linear convergenc ...
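For readers new to the algorithm, here is a compact SPDHG iteration on a toy least-squares saddle problem with one dual variable per row of the matrix; the serial uniform sampling and the step-size rule τ·σᵢ·‖Aᵢ‖² ≤ 1/n are one standard choice from the SPDHG literature, and the example is not tied to this paper's analysis.

```python
import numpy as np

# min_x 0.5 * ||A x - b||^2 as a saddle point: f_i(z) = 0.5 * (z - b_i)^2,
# so prox of sigma * f_i* is v -> (v - sigma * b_i) / (1 + sigma).
rng = np.random.default_rng(4)
n, d = 50, 20
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

row_norms = np.linalg.norm(A, axis=1)
gamma = 0.95
sigma = gamma / row_norms                 # per-block dual step sizes
tau = gamma / (n * row_norms.max())       # primal step size

x, y = np.zeros(d), np.zeros(n)
z = A.T @ y                               # running aggregate z = A^T y
zbar = z.copy()
for k in range(20_000):
    x = x - tau * zbar                    # primal step (g = 0, prox = id)
    i = rng.integers(n)                   # sample one dual block
    y_new = (y[i] + sigma[i] * (A[i] @ x) - sigma[i] * b[i]) / (1 + sigma[i])
    dz = A[i] * (y_new - y[i])
    y[i] = y_new
    z = z + dz
    zbar = z + n * dz                     # extrapolated dual aggregate

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("SPDHG residual:", round(float(np.linalg.norm(A @ x - b)), 4))
print("lstsq residual:", round(float(np.linalg.norm(A @ x_star - b)), 4))
```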
We study the performance of Stochastic Cubic Regularized Newton (SCRN) on a class of functions satisfying the gradient dominance property with exponent 1 ≤ α ≤ 2, which holds in a wide range of applications in machine learning and signal processing. This conditio ...
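A generic SCRN step forms sub-sampled gradient and Hessian estimates and approximately minimizes the cubic model m(s) = ⟨g, s⟩ + ½⟨s, Hs⟩ + (M/6)‖s‖³. In the sketch below, the inner solver (plain gradient descent on m), the batch size, and the least-squares test objective (which satisfies gradient dominance with α = 2) are illustrative assumptions, not the paper's exact variant.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
M = 10.0                                  # cubic-regularization constant

def scrn_step(x, batch=64, inner_iters=50, lr=0.05):
    idx = rng.choice(n, batch, replace=False)
    r = A[idx] @ x - b[idx]
    g = A[idx].T @ r / batch              # sub-sampled gradient
    H = A[idx].T @ A[idx] / batch         # sub-sampled Hessian
    s = np.zeros(d)
    for _ in range(inner_iters):          # gradient descent on the cubic model
        grad_m = g + H @ s + 0.5 * M * np.linalg.norm(s) * s
        s -= lr * grad_m
    return x + s

x = np.zeros(d)
for t in range(30):
    x = scrn_step(x)
print("grad norm:", np.linalg.norm(A.T @ (A @ x - b) / n))
```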