Impact of Redundancy on Resilience in Distributed Optimization and Learning
Related publications (49)
We consider the problem of making a multi-agent system (MAS) resilient to Byzantine failures through replication. We consider a very general model of MAS, where randomness can be involved in the behavior of each agent. We propose the first universal scheme ...
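The excerpt does not describe the scheme itself; as a rough, hypothetical illustration of resilience through replication, the sketch below masks up to f Byzantine replicas by running 2f + 1 copies of an agent and taking a majority vote over their reported outputs (function names such as majority_output are hypothetical, and honest replicas are assumed deterministic, unlike the randomized agents the paper allows).

from collections import Counter

def majority_output(replica_outputs):
    """Return the value reported by a strict majority of replicas, if any.

    With n >= 2f + 1 replicas and at most f Byzantine ones, the honest value
    wins the vote whenever all honest replicas agree on it.
    """
    value, freq = Counter(replica_outputs).most_common(1)[0]
    if freq > len(replica_outputs) // 2:
        return value
    raise ValueError("no majority: too many faulty or diverging replicas")

# Example: 5 replicas tolerate up to 2 Byzantine ones.
print(majority_output(["commit", "commit", "abort", "commit", "commit"]))  # -> commit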
We examine the problem of regret minimization when the learner is involved in a continuous game with other optimizing agents: in this case, if all players follow a no-regret algorithm, it is possible to achieve significantly lower regret relative to fully ...
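As a toy illustration of this setting (not the paper's game or algorithm): two players each running online gradient descent, a standard no-regret method, on a simple quadratic game whose unique equilibrium is (0, 0).

import numpy as np

# Player i has cost f_i(x_i, x_-i) = 0.5 * x_i**2 + 0.5 * x_i * x_-i.
rng = np.random.default_rng(0)
x = rng.normal(size=2)                      # current actions of players 0 and 1
eta = 0.1                                   # step size

for t in range(200):
    grads = np.array([x[0] + 0.5 * x[1],    # d f_0 / d x_0
                      x[1] + 0.5 * x[0]])   # d f_1 / d x_1
    x = x - eta * grads                     # simultaneous online gradient (no-regret) steps

print(x)    # the joint action approaches the equilibrium (0, 0)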
This paper considers the Byzantine fault-tolerance problem in the distributed stochastic gradient descent (D-SGD) method, a popular algorithm for distributed multi-agent machine learning. In this problem, each agent samples data points independently from a ...
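The excerpt does not say which defense the paper analyzes; one common family of fixes replaces the plain gradient average in D-SGD with a robust aggregation rule. A minimal sketch using a coordinate-wise trimmed mean (illustrative only):

import numpy as np

def trimmed_mean(gradients, f):
    """Coordinate-wise trimmed mean: in every coordinate, drop the f largest
    and f smallest reported values and average the rest."""
    g = np.sort(np.stack(gradients), axis=0)    # sort each coordinate across workers
    return g[f:len(gradients) - f].mean(axis=0)

# One toy D-SGD step: 5 workers, one of them Byzantine.
rng = np.random.default_rng(0)
honest = [np.array([1.0, 2.0]) + 0.1 * rng.normal(size=2) for _ in range(4)]
byzantine = [np.array([1e6, -1e6])]                      # arbitrary malicious gradient
theta = np.zeros(2)
theta -= 0.01 * trimmed_mean(honest + byzantine, f=1)    # server uses the robust aggregate
print(theta)    # the huge malicious values are discarded before averaging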
The ever-growing number of edge devices (e.g., smartphones) and the exploding volume of sensitive data they produce call for distributed machine learning techniques that are privacy-preserving. Given the increasing computing capabilities of modern edge devices ...
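A minimal sketch of the general pattern the excerpt motivates, federated averaging over edge devices, where raw data never leaves a device and only model parameters are shared (toy linear-regression example, not the cited paper's method):

import numpy as np

def local_step(theta, X, y, lr=0.1):
    """One local gradient step of linear regression on a device's private data."""
    grad = 2 * X.T @ (X @ theta - y) / len(y)
    return theta - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):                          # each device holds its own private dataset
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

theta = np.zeros(2)
for round_ in range(50):
    local_models = [local_step(theta, X, y) for X, y in devices]   # on-device training
    theta = np.mean(local_models, axis=0)                          # server averages parameters only
print(theta)    # approaches true_w without any device revealing its samples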
Blockchains have captured the attention of many, resulting in an abundance of new systems available for use. However, selecting an appropriate blockchain for an application is challenging due to the lack of comparative information discussing core metrics ...
This work proposes a novel strategy for social learning by introducing the critical feature of adaptation. In social learning, several distributed agents continually update their belief about a phenomenon of interest through: i) direct observation of ...
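A rough sketch of log-linear social learning over two hypotheses, with a constant step size as one simple way to obtain the adaptation the excerpt refers to (illustrative; the paper's exact updates may differ):

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, 0.5, 0.0],              # row-stochastic combination matrix over 3 agents
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
mu = np.full((3, 2), 0.5)                   # each agent's belief over hypotheses {0, 1}
delta = 0.1                                 # constant step size, so old evidence is discounted
true_hypothesis = 1

for t in range(500):
    obs = rng.normal(loc=true_hypothesis, size=3)          # private streaming observations
    logL = np.stack([-0.5 * (obs - 0) ** 2,                # log-likelihood under H0
                     -0.5 * (obs - 1) ** 2], axis=1)       # log-likelihood under H1
    log_mu = (1 - delta) * np.log(mu) + delta * logL       # adaptive Bayesian update
    log_mu = A @ log_mu                                    # geometric averaging with neighbors
    mu = np.exp(log_mu - log_mu.max(axis=1, keepdims=True))
    mu /= mu.sum(axis=1, keepdims=True)

print(mu.round(3))    # every agent ends up placing more mass on the true hypothesis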
Meta-learning aims to improve the efficiency of learning new tasks by exploiting the inductive biases obtained from related tasks. Previous works consider centralized or federated architectures that rely on central processors, whereas, in this paper, we propose ...
European Association for Signal, Speech and Image Processing (EURASIP), 2021
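For the meta-learning item above, a hypothetical sketch of a decentralized, Reptile-style variant: each agent adapts shared parameters to its own task with a few local gradient steps, nudges its meta-parameters toward the adapted solution, and then gossips with its neighbors instead of a central server (illustrative; the cited algorithm may differ).

import numpy as np

W = np.array([[2.0, 1.0, 0.0],              # doubly stochastic mixing matrix over 3 agents
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]]) / 3.0
task_optima = np.array([1.0, 2.0, 3.0])     # agent i's task: minimize (x - task_optima[i])**2
theta = np.zeros(3)                         # each agent's meta-parameter
alpha, beta = 0.1, 0.5                      # inner (task) and outer (meta) step sizes

for step in range(200):
    adapted = theta.copy()
    for _ in range(5):                                  # inner loop: task-specific adaptation
        adapted -= alpha * 2 * (adapted - task_optima)
    theta = theta + beta * (adapted - theta)            # Reptile-style meta-update
    theta = W @ theta                                   # gossip with neighbors, no central server

print(theta)    # meta-parameters spread around the average task optimum (network mean = 2)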
This work presents and studies a distributed algorithm for solving optimization problems over networks where agents have individual costs to minimize subject to subspace constraints that require the minimizers across the network to lie in a low-dimensional ...
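A minimal sketch of the adapt-then-project pattern behind subspace-constrained distributed optimization (illustrative, not the cited algorithm): each agent takes a gradient step on its own cost, and the stacked network iterate is then projected onto the constraint subspace. Choosing the subspace as the agreement set span{1} recovers plain consensus optimization.

import numpy as np

N = 4
targets = np.array([1.0, 2.0, 3.0, 4.0])    # agent i minimizes (x_i - targets[i])**2
U = np.ones((N, 1)) / np.sqrt(N)            # orthonormal basis of the subspace (agreement set)
P = U @ U.T                                 # projector onto span{1}

x = np.zeros(N)
mu = 0.1
for k in range(300):
    grad = 2 * (x - targets)                # local gradients
    x = P @ (x - mu * grad)                 # adapt, then project onto the subspace

print(x)    # all agents agree on the average target 2.5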
The central task in many interactive machine learning systems can be formalized as the sequential optimization of a black-box function. Bayesian optimization (BO) is a powerful model-based framework for adaptive experimentation, where the primary goal ...
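A compact Bayesian-optimization sketch (generic BO, not the specifics of the cited work): fit a Gaussian-process surrogate to the evaluations collected so far, then query the point that maximizes an upper-confidence-bound acquisition. All function and variable names here are hypothetical.

import numpy as np

def rbf(a, b, ls=0.3):
    """Unit-variance RBF kernel between two 1-D sets of points."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    """GP posterior mean and standard deviation at query points Xq."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.sqrt(np.maximum(var, 1e-12))

f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x     # black-box objective to maximize
grid = np.linspace(-1.0, 2.0, 400)
X = np.array([-0.5, 1.5])                           # initial design
y = f(X)

for t in range(15):
    mean, std = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mean + 2.0 * std)]      # upper-confidence-bound acquisition
    X = np.append(X, x_next)                        # evaluate the black box at the new point
    y = np.append(y, f(x_next))

print("best point:", X[np.argmax(y)], "best value:", y.max())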
Whether it occurs in artificial or biological substrates, learning is a distributed phenomenon in at least two aspects. First, meaningful data and experiences are rarely found in one location, hence learners have a strong incentive to work ...