Stochastic distributed learning with gradient quantization and double-variance reduction
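The "gradient quantization" in the title refers to lossy compression of gradients before they are communicated between workers. As a minimal illustration (a standard QSGD-style random-dithering compressor, not necessarily the paper's exact operator), the following sketch shows an unbiased quantizer of the kind such methods build on:

```python
import numpy as np

def random_dithering(v, s=4, rng=None):
    """Unbiased QSGD-style random-dithering quantizer with s levels.

    Illustration only: the paper pairs a compressor of this kind with
    variance reduction, which this sketch does not implement.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    level = s * np.abs(v) / norm                     # position in [0, s]
    low = np.floor(level)
    xi = low + (rng.random(v.shape) < level - low)   # stochastic rounding
    return norm * np.sign(v) * xi / s                # E[output] = v
```

The "double" variance reduction then targets both the stochastic-gradient noise and the extra noise this randomized rounding injects.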
In recent years, learning-based image compression has demonstrated similar or superior performance when compared to conventional approaches in terms of compression efficiency and visual quality. Typically, learning-based image compression takes advantage ...
It has been experimentally observed that the efficiency of distributed training with stochastic gradient descent (SGD) depends decisively on the batch size and, in asynchronous implementations, on the gradient staleness. In particular, it has been observed that the spe ...
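As a toy model of the staleness effect mentioned in this abstract (my illustration, not the cited experiments), one can run SGD where each update uses the gradient of an iterate computed `delay` steps earlier, mimicking an asynchronous worker:

```python
import numpy as np

def stale_sgd(grad, x0, lr=0.1, delay=4, steps=200):
    """SGD driven by gradients evaluated at an iterate `delay` steps old;
    delay=0 recovers ordinary (synchronous) SGD."""
    x = np.asarray(x0, dtype=float)
    history = [x] * (delay + 1)          # snapshots, oldest first
    for _ in range(steps):
        g = grad(history[0])             # gradient at the stale snapshot
        x = x - lr * g
        history = history[1:] + [x]
    return x

# Toy run on f(x) = 0.5*||x||^2; larger `delay` forces a smaller stable `lr`.
x = stale_sgd(lambda x: x, x0=np.ones(3), delay=8)
```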
We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y \rangle - g^*(y)$, where f is a convex function with locally Lipschitz gradient and g is convex and possibly non-smooth. We propose an ...
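To make the template concrete, here is a sketch of one standard primal-dual iteration for this objective (a Condat-Vu style step, which is my choice since the proposed method is cut off above), instantiated with f(x) = 0.5*||x - b||^2 and g = ||.||_1, whose conjugate's prox is a clip:

```python
import numpy as np

def primal_dual(grad_f, A, prox_gstar, x0, y0, tau, sigma, iters=500):
    """Primal-dual iteration for min_x max_y f(x) + <Ax, y> - g*(y):
    a gradient step in x, then a prox step in y at an extrapolated point."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x_new = x - tau * (grad_f(x) + A.T @ y)                   # primal step
        y = prox_gstar(y + sigma * (A @ (2 * x_new - x)), sigma)  # dual step
        x = x_new
    return x, y

# Instance: f(x) = 0.5*||x - b||^2 and g = ||.||_1, so g* is the indicator
# of the unit l-infinity ball and prox_{sigma g*} is a clip to [-1, 1].
rng = np.random.default_rng(0)
A, b = rng.standard_normal((5, 3)), rng.standard_normal(3)
x, y = primal_dual(lambda x: x - b, A,
                   lambda v, s: np.clip(v, -1.0, 1.0),
                   np.zeros(3), np.zeros(5), tau=0.05, sigma=0.05)
```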
Omnidirectional images are spherical visual signals that provide a wide, 360° view of a scene from a specific position. Such images are becoming increasingly popular in fields like virtual reality and robotics. Compared to conventional 2D images, the ...
Combining diffusion strategies with complementary properties enables enhanced performance when they can be run simultaneously. In this article, we first propose two schemes for the convex combination of two diffusion strategies, namely, the power-normalize ...
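A minimal single-filter sketch of the convex-combination idea (the standard sigmoid-parametrized mixing rule from the adaptive-filtering literature; the article's diffusion-specific schemes are truncated above):

```python
import numpy as np

def convex_combine(y1, y2, d, a=0.0, mu_a=0.5):
    """Adaptively mix two component outputs y1, y2 toward the desired signal d
    with weight lam = sigmoid(a), so the mixture stays a convex combination."""
    out = np.empty_like(d, dtype=float)
    for n in range(len(d)):
        lam = 1.0 / (1.0 + np.exp(-a))
        out[n] = lam * y1[n] + (1.0 - lam) * y2[n]
        e = d[n] - out[n]                                    # combined error
        a += mu_a * e * (y1[n] - y2[n]) * lam * (1.0 - lam)  # gradient step on a
    return out
```

The update on a is a stochastic gradient step on the squared combined error; the lam * (1 - lam) factor comes from differentiating the sigmoid.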
The rapid development of digital imaging and video has placed visual content at the heart of our lives. Digital multimedia spans a vast number of areas, from business to leisure, including but not limited to education, medicine, accessibility, training, adv ...
We introduce a generic two-loop scheme for smooth minimax optimization with strongly-convex-concave objectives. Our approach applies the accelerated proximal point framework (or Catalyst) to the associated dual problem and takes full advantage of existing ...
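Schematically, the two-loop structure is an accelerated outer proximal-point loop wrapped around an inexact inner solver. A generic sketch (my simplified momentum schedule on a plain minimization problem; the paper's dual minimax instantiation differs):

```python
import numpy as np

def inner_solver(grad_F, center, beta, lr=0.1, steps=50):
    """Inner loop: inexactly minimize F(u) + (beta/2)*||u - center||^2."""
    u = center.copy()
    for _ in range(steps):
        u = u - lr * (grad_F(u) + beta * (u - center))
    return u

def catalyst(grad_F, u0, beta=1.0, outer_iters=20):
    """Outer loop: accelerated proximal point, i.e. each step solves a
    beta-regularized subproblem around an extrapolated prox center."""
    u, u_prev = u0.copy(), u0.copy()
    for k in range(outer_iters):
        center = u + (k / (k + 3.0)) * (u - u_prev)  # momentum extrapolation
        u, u_prev = inner_solver(grad_F, center, beta), u
    return u

# Toy usage: F(u) = 0.5*||u||^2, so grad_F is the identity map.
u = catalyst(lambda u: u, np.ones(4))
```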
In this paper, we analyze the recently proposed stochastic primal-dual hybrid gradient (SPDHG) algorithm and provide new theoretical results. In particular, we prove almost sure convergence of the iterates to a solution and linear convergence with standard ...
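The core SPDHG step updates the full primal variable but only one randomly sampled dual coordinate per iteration, with an over-extrapolation that compensates for the sampling. A sketch on a least-squares instance (my choice of instance and step sizes; the paper's analysis covers the general setting):

```python
import numpy as np

def spdhg(A, b, tau, sigma, iters=5000, seed=0):
    """SPDHG-style iteration for min_x 0.5*||Ax - b||^2, viewed as the saddle
    point min_x max_y <Ax, y> - sum_i g_i*(y_i) with g_i*(y) = 0.5*y^2 + b_i*y.
    Step sizes must satisfy the usual tau*sigma*||A_i||^2 type bound."""
    rng = np.random.default_rng(seed)
    m, _ = A.shape
    x, y = np.zeros(A.shape[1]), np.zeros(m)
    ybar = y.copy()
    for _ in range(iters):
        x = x - tau * (A.T @ ybar)                     # primal step (f = 0)
        i = rng.integers(m)                            # sample one dual block
        yi = (y[i] + sigma * (A[i] @ x) - sigma * b[i]) / (1.0 + sigma)
        ybar = y.copy()
        ybar[i] = yi + m * (yi - y[i])                 # over-extrapolation
        y[i] = yi
    return x

x = spdhg(np.eye(3), np.array([1.0, -2.0, 0.5]), tau=0.2, sigma=0.2)
```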
This work studies multi-agent sharing optimization problems in which the objective is the sum of smooth local functions plus a convex (possibly non-smooth) function coupling all agents. This scenario arises in many machine learning and engineering ...
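Written out, the sharing template described here typically takes the form below (one common formulation; an assumption on my part, since the abstract is truncated):

```latex
\min_{w_1,\dots,w_K}\ \sum_{k=1}^{K} J_k(w_k) \;+\; g\!\left(\sum_{k=1}^{K} B_k w_k\right)
```

where agent k holds the smooth local cost J_k and local variable w_k, the matrices B_k mix the local variables, and g is the convex (possibly non-smooth) term that couples all agents.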