Machine learning is currently shifting from a centralized paradigm to decentralized ones in which models are trained collaboratively. In fully decentralized learning algorithms, data remains where it was produced: models are trained locally, and only model parameters are exchanged among participating entities along an arbitrary network topology and aggregated over time until convergence. Not only does this limit the cost of exchanging data, it also exploits the growing capabilities of users' devices while mitigating privacy and confidentiality concerns. Such systems are significantly challenged by potentially high levels of heterogeneity, both at the system level, as participants may have differing capabilities (e.g., computing power, memory and network connectivity), and at the data level (a.k.a. non-IIDness). The adoption of fully decentralized learning systems requires designing frugal systems that limit communication and energy consumption while still ensuring convergence. Several avenues are promising, from adapting the network topology to compensate for data heterogeneity to exploiting the high levels of redundancy, both in data and computations, of ML algorithms to limit data and model sharing in such systems.
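To make the train-locally-then-exchange-parameters loop concrete, below is a minimal sketch of gossip-style decentralized gradient descent on a toy linear-regression task. The ring topology, learning rate, and per-node data shift are illustrative assumptions, not the actual algorithm or setup used in this project; raw data never leaves a node, and each node averages parameters only with its direct neighbours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each node holds its own local data. Shifting the feature
# distribution per node emulates data heterogeneity (non-IIDness).
n_nodes, dim = 4, 5
true_w = rng.normal(size=dim)
data = []
for i in range(n_nodes):
    X = rng.normal(loc=i, size=(50, dim))          # node-specific distribution
    y = X @ true_w + 0.1 * rng.normal(size=50)
    data.append((X, y))

# Arbitrary network topology; here a ring, so each node only
# communicates with its two neighbours.
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

# One model per node; only these parameter vectors are ever exchanged.
w = [np.zeros(dim) for _ in range(n_nodes)]
lr = 0.01

for step in range(200):
    # 1) Local training: each node takes a gradient step on its own data.
    grads = [2 * X.T @ (X @ w[i] - y) / len(y) for i, (X, y) in enumerate(data)]
    w = [w[i] - lr * grads[i] for i in range(n_nodes)]

    # 2) Gossip averaging: each node aggregates parameters from its
    #    neighbours only; repeated over time, the models converge.
    w = [(w[i] + sum(w[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
         for i in range(n_nodes)]

print("node 0 distance to true model:", np.linalg.norm(w[0] - true_w))
```

Because each node mixes uniformly with itself and its neighbours, the averaging step corresponds to a doubly stochastic mixing matrix, which is what allows the locally trained models to agree over time despite each node seeing different data.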