Exact Diffusion for Distributed Optimization and Learning---Part II: Convergence Analysis
Related publications (52)
Finding convergence rates for numerical optimization algorithms is an important task because it justifies their use in solving practical problems and provides a way to compare their efficiency. This is especially useful in an async ...
Basis adaptation in Homogeneous Chaos spaces relies on a suitable rotation of the underlying Gaussian germ. Several rotations have been proposed in the literature, resulting in adaptations with different convergence properties. In this paper we present a new ...
This paper presents a closed-form approach to obstacle avoidance for multiple moving convex and star-shaped concave obstacles. The method takes inspiration from harmonic-potential fields and inherits the convergence properties of harmonic potentials. We prov ...
Interest in distributed stochastic optimization has grown with the need to train complex machine learning models on more data across distributed systems. Increasing the computation power speeds up training, but it faces a communication bottleneck between workers ...
In this article, we address the numerical solution of the Dirichlet problem for the three-dimensional elliptic Monge-Ampere equation using a least-squares/relaxation approach. The relaxation algorithm allows the decoupling of the differential operators fro ...
This work develops a distributed optimization strategy with guaranteed exact convergence for a broad class of left-stochastic combination policies. The resulting exact diffusion strategy is shown in Part II to have a wider stability range and superior conv ...
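As a rough illustration of the adaptation, correction, and combination steps usually associated with the exact diffusion strategy, the sketch below runs the recursion on local quadratic costs with a symmetric, doubly stochastic ring combination matrix. The costs, the step size, the matrix, and the iteration count are illustrative assumptions, not the setup of the paper.

```python
import numpy as np

# Hedged sketch: exact-diffusion-style recursion on N agents, each with a local
# quadratic cost J_i(w) = 0.5 * ||A_i w - b_i||^2, so grad J_i(w) = A_i^T (A_i w - b_i).
rng = np.random.default_rng(0)
N, d = 5, 3                                   # number of agents, dimension of w (assumed)
A_loc = [rng.standard_normal((10, d)) for _ in range(N)]
b_loc = [rng.standard_normal(10) for _ in range(N)]

def grad(i, w):
    return A_loc[i].T @ (A_loc[i] @ w - b_loc[i])

# Symmetric, doubly stochastic ring: each agent mixes with its two neighbors.
C = np.zeros((N, N))
for i in range(N):
    C[i, i] = 0.5
    C[i, (i - 1) % N] = 0.25
    C[i, (i + 1) % N] = 0.25

mu = 0.01                                     # step size, assumed small enough for stability
w = np.zeros((N, d))                          # current iterates w_i
psi_prev = w.copy()                           # previous psi_i; first step is plain diffusion

for k in range(2000):
    psi = w - mu * np.array([grad(i, w[i]) for i in range(N)])  # adaptation
    phi = psi + w - psi_prev                                    # correction
    w = C.T @ phi                                               # combination: w_i = sum_j a_{ji} phi_j
    psi_prev = psi

# Compare against the centralized least-squares minimizer of sum_i J_i(w).
A_all, b_all = np.vstack(A_loc), np.concatenate(b_loc)
w_star, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)
print(np.allclose(w[0], w_star, atol=1e-6))
```

Under a general left-stochastic policy the limit point is reported to be a weighted aggregate cost determined by the Perron vector of the combination matrix; the doubly stochastic ring used here makes those weights uniform, which is why the check against the plain least-squares solution passes.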
The analysis in Part I [1] revealed interesting properties for subgradient learning algorithms in the context of stochastic optimization. These algorithms are used when the risk functions are non-smooth or involve non-differentiable components. They have b ...
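The abstract above concerns subgradient learning for risks with non-differentiable components. As a small hedged illustration of that setting, the sketch below applies a stochastic subgradient update to an l1-regularized least-squares risk; the specific risk, data model, and step size are assumptions for illustration, not taken from the cited work.

```python
import numpy as np

# Illustrative stochastic subgradient descent on the non-smooth risk
# J(w) = E[0.5 * (y - x^T w)^2] + rho * ||w||_1.
# A subgradient of rho * ||w||_1 is rho * sign(w); sign(0) = 0 is one valid choice.
rng = np.random.default_rng(1)
d = 10
w_true = np.zeros(d); w_true[:3] = [1.0, -2.0, 0.5]   # sparse ground truth (assumed)

w = np.zeros(d)
mu, rho = 0.01, 0.05                                   # step size and l1 weight (assumed)

for k in range(20000):
    x = rng.standard_normal(d)                         # one streaming sample
    y = x @ w_true + 0.1 * rng.standard_normal()
    smooth_grad = -(y - x @ w) * x                     # gradient of the quadratic part
    sub_grad = rho * np.sign(w)                        # subgradient of the l1 part
    w -= mu * (smooth_grad + sub_grad)

print(np.round(w, 2))   # close to w_true, with small entries pulled toward zero
```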
In this work, a distributed multi-agent optimization problem is studied where different subsets of agents are coupled with each other through affine constraints. Moreover, each agent is only aware of its own contribution to the constraints and only knows w ...
The identification of reaction kinetics represents the main challenge in building models for reaction systems. The identification task can be performed via either simultaneous model identification (SMI) or incremental model identification (IMI), the latter ...
The significant progress that has been made in recent years both in hardware implementations and in numerical computing has rendered real-time optimization-based control a viable option when it comes to advanced industrial applications. At the same time, t ...