Publication

Optimization methods for collaborative learning

Abstract

A traditional machine learning pipeline involves collecting massive amounts of data centrally on a server and training models to fit the data. However, increasing concerns about the privacy and security of users' data, combined with the sheer growth in data sizes, have incentivized looking beyond such traditional centralized approaches. Collaborative learning (which encompasses distributed, federated, and decentralized learning) instead proposes that a network of data holders collaborate to train models without transmitting any raw data. This new paradigm minimizes data exposure, but it inherently faces some fundamental challenges. In this thesis, we bring to bear the framework of stochastic optimization to formalize these challenges and develop new algorithms for them. This serves not only to develop novel solutions, but also to test the utility of the optimization lens in modern deep learning.

We study three fundamental problems. First, collaborative training replaces a one-time transmission of raw data with repeated rounds of communicating partially trained models, which quickly runs up against bandwidth constraints when dealing with large models. We propose to overcome this constraint using compressed communication. Second, collaborative training leverages the computation power of the data holders directly. This is less reliable than using a data center, since only a subset of the data holders is available at any given time; we therefore need new algorithms that can efficiently use their unreliable local computation. Finally, collaborative training allows any data holder to participate in the training process without exposing their data or local computation to inspection. This potentially opens the system to malicious or faulty agents who seek to derail the training. We develop Byzantine-robust algorithms that are guaranteed to be resilient to such attackers.
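To make the compressed-communication challenge concrete, the following is a minimal sketch of one standard scheme from this literature, top-k sparsification with error feedback. The choice of operator, the step size lr, and the sparsity level k are illustrative assumptions for this example, not the specific algorithm of the thesis.

import numpy as np

def top_k(v: np.ndarray, k: int) -> np.ndarray:
    # Keep only the k largest-magnitude entries of v (a biased compressor).
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd_step(x, grad, memory, lr=0.1, k=10):
    # One error-feedback step: compress the current step plus the residual
    # left over from earlier rounds, transmit only the sparse message, and
    # carry the compression error forward locally.
    corrected = lr * grad + memory   # re-inject previously discarded mass
    message = top_k(corrected, k)    # the only quantity that is communicated
    memory = corrected - message     # residual kept for the next round
    return x - message, memory       # server applies the sparse update

Starting from memory = np.zeros_like(x), the error-feedback memory ensures that discarded gradient coordinates are retried in later rounds rather than lost, which is the intuition for why aggressive compression can remain compatible with convergence.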
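Similarly, unreliable participation can be illustrated with a sketch of a federated-averaging round in which only a random subset of data holders is available. The helper local_update and the parameters sample_frac and local_steps are hypothetical names introduced for this example.

import random
import numpy as np

def fedavg_round(x, clients, local_update, sample_frac=0.1, local_steps=5):
    # Only a random fraction of clients is reachable this round, modeling
    # the unreliable availability of the data holders' local computation.
    available = random.sample(clients, max(1, int(sample_frac * len(clients))))
    updates = []
    for c in available:
        x_local = x.copy()
        for _ in range(local_steps):
            x_local = local_update(x_local, c)  # e.g., one SGD step on c's data
        updates.append(x_local)
    return np.mean(updates, axis=0)  # server averages the returned models

Plain averaging of many local steps can drift when the data holders' distributions differ, which is one of the effects that algorithms for unreliable local computation must control.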
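Finally, Byzantine robustness typically replaces the server's plain average with a robust aggregation rule. The sketch below shows two classic examples, the coordinate-wise median and the trimmed mean; they illustrate the general idea of bounding the influence of a malicious minority, not the particular aggregators developed in the thesis.

import numpy as np

def coordinate_median(grads: np.ndarray) -> np.ndarray:
    # grads has one row per worker; the per-coordinate median cannot be
    # dragged arbitrarily far by a minority of Byzantine rows.
    return np.median(grads, axis=0)

def trimmed_mean(grads: np.ndarray, b: int) -> np.ndarray:
    # Discard the b smallest and b largest values in every coordinate and
    # average the rest (assumes the number of workers exceeds 2 * b).
    s = np.sort(grads, axis=0)
    return s[b:grads.shape[0] - b].mean(axis=0)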
