Related publications (30)

Critical Parameters for Scalable Distributed Learning with Large Batches and Asynchronous Updates

Martin Jaggi, Sebastian Urban Stich, Amirkeivan Mohtashami

It has been experimentally observed that the efficiency of distributed training with stochastic gradient descent (SGD) depends decisively on the batch size and, in asynchronous implementations, on the gradient staleness. In particular, it has been observed that the spe ...
2021
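
As a rough illustration of the staleness effect this abstract refers to (a toy sketch, not the paper's analysis), the snippet below simulates asynchronous SGD on a quadratic objective by applying gradients that were computed several iterations earlier; the objective, step size, and noise model are all hypothetical choices.

```python
import numpy as np

# Toy sketch (hypothetical setup, not the paper's method): asynchronous SGD
# on f(x) = 0.5 * ||x||^2, where each applied update uses a gradient that
# was evaluated `staleness` iterations ago, mimicking delayed workers.
def stale_sgd(dim=10, steps=200, lr=0.1, staleness=0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)
    history = [x.copy()]  # past iterates, used to look up stale gradients
    for _ in range(steps):
        # gradient is evaluated at the iterate from `staleness` steps ago
        x_old = history[max(0, len(history) - 1 - staleness)]
        grad = x_old + 0.01 * rng.normal(size=dim)  # noisy gradient of f
        x = x - lr * grad
        history.append(x.copy())
    return 0.5 * float(x @ x)  # final objective value

for tau in (0, 5, 20):
    print(f"staleness={tau:2d}  f(x_T) = {stale_sgd(staleness=tau):.3e}")
```

With these toy settings, the final objective typically worsens as the staleness grows, which is the qualitative dependence on batch size and staleness that the abstract describes.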

A first-order primal-dual method with adaptivity to local smoothness

Volkan Cevher, Maria-Luiza Vladarean

We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a convex function with locally Lipschitz gradient and $g$ is convex and possibly non-smooth. We propose an ...
2021
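
For context, here is a minimal fixed-step primal-dual sketch for this saddle-point template, under the hypothetical choices $f(x) = \tfrac{1}{2}\|x - b\|^2$ and $g = \|\cdot\|_1$, so that $g^*$ is the indicator of the unit box and its prox is clipping. This is the classical non-adaptive iteration (Chambolle-Pock / Condat-Vu style), not the adaptive method the paper proposes.

```python
import numpy as np

# Hypothetical instance of min_x max_y f(x) + <Ax, y> - g*(y):
# f(x) = 0.5*||x - b||^2 and g = ||.||_1, so the problem is equivalent to
# min_x 0.5*||x - b||^2 + ||Ax||_1 and prox of g* is clipping to [-1, 1].
rng = np.random.default_rng(0)
m, n = 30, 20
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = rng.normal(size=n)

L = np.linalg.norm(A, 2)   # operator norm of A
tau = 0.5 / L              # primal step size (fixed, non-adaptive)
sigma = 0.5 / L            # dual step size
x, x_bar, y = np.zeros(n), np.zeros(n), np.zeros(m)

for _ in range(500):
    # dual ascent step, then projection onto {y : ||y||_inf <= 1} (prox of g*)
    y = np.clip(y + sigma * (A @ x_bar), -1.0, 1.0)
    # primal descent step using the gradient of f(x) = 0.5*||x - b||^2
    x_new = x - tau * ((x - b) + A.T @ y)
    x_bar = 2 * x_new - x  # extrapolation (theta = 1)
    x = x_new

print("objective:", 0.5 * np.sum((x - b) ** 2) + np.sum(np.abs(A @ x)))
```

The step sizes here are chosen conservatively relative to $\|A\|$; the point of the paper's adaptive scheme is precisely to avoid such fixed, globally tuned steps when the gradient of $f$ is only locally Lipschitz.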

System-level, Input-output and New Parameterizations of Stabilizing Controllers, and Their Numerical Computation

Maryam Kamgarpour, Luca Furieri, Na Li

It is known that the set of internally stabilizing controllers $\mathcal{C}_{\text{stab}}$ is non-convex, but it admits convex characterizations using certain closed-loop maps: a classical result is the Youla parameterization, and two recent notions are t ...
2020
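
For reference, the classical Youla parameterization the abstract mentions, written out for the special case of an already-stable plant $P$ (the paper addresses general plants and the newer system-level and input-output parameterizations):

```latex
% Hedged illustration: Youla parameterization for a stable plant P only.
\[
  \mathcal{C}_{\text{stab}}
  = \left\{\, K = Q\,(I - P Q)^{-1} \;:\; Q \text{ stable} \,\right\},
  \qquad Q = K\,(I + P K)^{-1}.
\]
% Under this change of variables every closed-loop map is affine in Q,
% which turns controller synthesis into a convex program over Q.
```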
