In distributed optimization and machine learning, multiple nodes coordinate to solve large problems. To do so, the nodes must compress important algorithm information into bits so that it can be communicated over a digital channel. The communication time of these algorithms is governed by a complex interplay between a) the algorithm's convergence properties, b) the compression scheme, and c) the transmission rate offered by the digital channel. We explore these relationships for a general class of linearly convergent distributed algorithms. In particular, we illustrate how to design quantizers for these algorithms that compress the communicated information to a few bits while still preserving linear convergence. Moreover, we characterize the communication time of these algorithms as a function of the available transmission rate. We illustrate our results on learning algorithms with different communication structures, such as decentralized algorithms in which a single master coordinates information from many workers and fully distributed algorithms in which only neighbours in a communication graph can communicate. We conclude that a co-design of machine learning and communication protocols is essential for machine learning over networks to flourish.
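To make the idea of quantized communication concrete, here is a minimal Python sketch of a master-worker gradient loop in which each worker sends a b-bit uniformly quantized gradient. This is only an illustration of the general technique, not the quantizer or algorithm studied in the work above; the functions `uniform_quantize` and `quantized_gradient_descent`, the toy quadratic objectives, and all parameter values (`bits`, `scale`, `step`) are hypothetical choices made for the example.

```python
import numpy as np

def uniform_quantize(x, bits, scale):
    """Illustrative uniform quantizer: map each entry of x to one of
    2**bits levels on [-scale, scale] and return the de-quantized value."""
    levels = 2 ** bits - 1
    clipped = np.clip(x, -scale, scale)
    # Integer index in {0, ..., levels}: this is what would be sent over the channel.
    idx = np.round((clipped + scale) / (2 * scale) * levels)
    return idx * (2 * scale) / levels - scale

def quantized_gradient_descent(grads, x0, step, bits, scale, iters):
    """Master-worker loop: each worker quantizes its local gradient,
    the master averages the de-quantized values and takes a step."""
    x = x0.copy()
    for _ in range(iters):
        g = np.mean([uniform_quantize(gi(x), bits, scale) for gi in grads], axis=0)
        x = x - step * g
    return x

# Toy example: two workers, each holding a quadratic f_i(x) = 0.5 * ||x - c_i||^2.
c1, c2 = np.array([1.0, -2.0]), np.array([3.0, 4.0])
grads = [lambda x: x - c1, lambda x: x - c2]
x_final = quantized_gradient_descent(grads, np.zeros(2), step=0.5,
                                     bits=4, scale=8.0, iters=50)
print(x_final)  # approaches the minimizer (2.0, 1.0) up to quantization error
```

In this toy setting, using fewer bits shrinks the payload sent per iteration but introduces quantization error, which is one simple instance of the trade-off between convergence, compression, and transmission rate discussed above.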