Exact Diffusion for Distributed Optimization and Learning-Part I: Algorithm Development
Related publications (32)
The research community has been making significant progress in hardware implementation, numerical computing, and algorithm development for optimization-based control. However, two key challenges still have to be overcome for optimization-based ...
This article studies a class of nonsmooth decentralized multiagent optimization problems in which the agents aim to minimize a sum of local strongly convex, smooth components plus a common nonsmooth term. We propose a general primal-dual algorithmic framework ...
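The abstract is cut off here, but the smooth-plus-nonsmooth template it describes is concrete enough to illustrate. The sketch below shows the centralized proximal-gradient building block for minimizing f(x) + g(x); it is not the paper's decentralized primal-dual framework, and `grad_f`, `prox_g`, the soft-thresholding example, and all step sizes are illustrative assumptions.

```python
import numpy as np

def prox_gradient(grad_f, prox_g, x0, step=0.1, iters=300):
    """Proximal-gradient iteration for min_x f(x) + g(x), with f smooth
    and g nonsmooth but prox-friendly. Decentralized primal-dual schemes
    for this problem class can be viewed as networked extensions of
    this single-agent recursion."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)  # gradient step on f, prox on g
    return x

def soft_threshold(v, step, lam=0.1):
    """Proximal operator of g(x) = lam * ||x||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
```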
Multiscale problems, such as modelling flows through porous media or predicting the mechanical properties of composite materials, are of great interest in many scientific areas. Analytical models describing these phenomena are rarely available, and one must ...
Partial discharge (PD) occurrence in power transformers can lead to irreparable damage to the power network. In this paper, the inverse filter (IF) method for localizing PDs in power transformers is proposed. To the best of the authors’ knowledge, this is the ...
The analysis in Part I [1] revealed interesting properties of subgradient learning algorithms in the context of stochastic optimization. These algorithms are used when the risk functions are non-smooth or involve non-differentiable components. They have been ...
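As a point of reference for the class of algorithms being analyzed, here is a minimal stochastic subgradient recursion with a constant step size; the hinge-loss example and all names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def stochastic_subgradient(sample_subgrad, w0, mu=0.05, iters=1000):
    """Constant step-size stochastic subgradient descent: applicable when
    the risk function is non-smooth or has non-differentiable components.
    sample_subgrad(w) returns a subgradient of the loss on a random sample."""
    w = w0.copy()
    for _ in range(iters):
        w = w - mu * sample_subgrad(w)
    return w

def hinge_subgrad(w, x, y):
    """A subgradient of the hinge loss max(0, 1 - y * <x, w>) at w."""
    return -y * x if y * np.dot(x, w) < 1 else np.zeros_like(w)
```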
The Dirichlet-Neumann (DN) method has been extensively studied for linear partial differential equations, while little attention has been devoted to the nonlinear case. In this paper, we analyze the DN method both as a nonlinear iterative method and as a p ...
Interest in distributed stochastic optimization has risen with the need to train complex machine learning models on more data over distributed systems. Increasing the computational power speeds up training, but it faces a communication bottleneck between workers ...
Part I of this paper developed the exact diffusion algorithm to remove the bias that is characteristic of distributed solutions for deterministic optimization problems. The algorithm was shown to be applicable to the larger set of locally balanced left-stochastic ...
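For context, the exact diffusion recursion from Part I consists of an adapt step, a correction step, and a combine step. The NumPy sketch below assumes, for simplicity, a symmetric doubly stochastic combination matrix `A`; the papers' contribution covers the broader locally balanced left-stochastic case, and all function and variable names here are illustrative.

```python
import numpy as np

def exact_diffusion(grad_fns, A, w0, mu=0.01, iters=500):
    """Exact diffusion over N agents (adapt / correct / combine).

    grad_fns -- list of per-agent gradient callables, grad_fns[k](w)
    A        -- N x N combination matrix; A[l, k] is the weight agent k
                assigns to neighbor l (assumed symmetric, doubly stochastic)
    w0       -- common initial iterate of dimension d
    """
    N = len(grad_fns)
    w = np.tile(w0, (N, 1))                   # w[k] is agent k's iterate
    psi_prev = w.copy()
    for _ in range(iters):
        # Adapt: each agent takes a local gradient step.
        psi = np.array([w[k] - mu * grad_fns[k](w[k]) for k in range(N)])
        # Correct: the term (w - psi_prev) removes the fixed-point bias
        # that plain diffusion leaves behind.
        phi = psi + w - psi_prev
        # Combine: average the corrected iterates with the neighbors.
        w = A.T @ phi
        psi_prev = psi
    return w.mean(axis=0)
```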
We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms. Existing CGM variants for this template either suffer from slow convergence rates or require carefully inc ...
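The conditional-gradient (Frank-Wolfe) template the abstract refers to replaces projections with a linear minimization oracle over the feasible set. Below is a generic stochastic variant with a simple gradient-averaging estimator, a common device for taming gradient noise; it is not the paper's proposed method, and the l1-ball oracle and all parameters are illustrative.

```python
import numpy as np

def stochastic_frank_wolfe(stoch_grad, lmo, x0, iters=200):
    """Projection-free stochastic conditional-gradient method.

    stoch_grad -- returns an unbiased estimate of the gradient at x
    lmo        -- linear minimization oracle: lmo(g) = argmin_{s in C} <g, s>
    x0         -- feasible starting point in C
    """
    x = x0.copy()
    d = np.zeros_like(x0)                 # running average of gradient estimates
    for t in range(1, iters + 1):
        rho = 2.0 / (t + 1)               # averaging weight
        d = (1 - rho) * d + rho * stoch_grad(x)
        s = lmo(d)                        # best vertex for the averaged gradient
        gamma = 2.0 / (t + 2)             # classical Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s   # stay feasible by convex combination
    return x

def l1_ball_lmo(g, tau=1.0):
    """LMO for the l1-ball of radius tau: pick the coordinate with the
    largest |g_i| and move to the vertex -tau * sign(g_i) * e_i."""
    s = np.zeros_like(g)
    i = int(np.argmax(np.abs(g)))
    s[i] = -tau * np.sign(g[i])
    return s
```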
For high-dimensional problems, a randomized Gram-Schmidt (RGS) algorithm is beneficial in terms of both computational cost and numerical stability. We apply this dimension-reduction technique by random sketching to Krylov subspace methods, e.g., to the generalized ...
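The idea behind randomized Gram-Schmidt is to compute the orthogonalization coefficients from low-dimensional random sketches of the vectors rather than from the full vectors, so the expensive inner products happen in the sketch dimension. Here is a minimal sketch of this scheme, assuming a Gaussian sketching matrix and a sketch size at least the number of columns; the variable names are illustrative.

```python
import numpy as np

def randomized_gram_schmidt(W, k, seed=0):
    """Sketched Gram-Schmidt: columns of the output Q have exactly
    orthonormal *sketches* S @ Q, which is the relaxed orthogonality
    that randomized Krylov solvers rely on.

    W -- n x m matrix whose columns are to be orthogonalized
    k -- sketch dimension, m <= k << n
    """
    n, m = W.shape
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((k, n)) / np.sqrt(k)   # Gaussian sketching matrix
    Q = np.zeros((n, m))
    P = np.zeros((k, m))                           # P[:, j] = S @ Q[:, j]
    for j in range(m):
        q = W[:, j].copy()
        r = P[:, :j].T @ (S @ q)     # coefficients from k-dim inner products
        q -= Q[:, :j] @ r            # orthogonalization applied in full space
        p = S @ q
        nrm = np.linalg.norm(p)      # normalize in the sketch norm
        Q[:, j] = q / nrm
        P[:, j] = p / nrm
    return Q
```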