The Scalable Computing Systems Laboratory at EPFL designs efficient large-scale distributed systems, spanning datacenters, edge computing, fully decentralized systems, and self-organizing systems. Its research interests cover system support for machine learning, federated learning systems, large-scale recommenders, graph-based systems, and privacy-aware recommendation systems. The lab tackles the challenges of scaling systems to thousands or even millions of distributed entities, with an emphasis on scalable design, failure resilience, performance, and privacy preservation. Recent projects include Epidemic Learning, DecentralizePy for decentralized learning, and FLEET for online federated learning. Ongoing student projects involve end-to-end auditing of decentralized learning, boosting decentralized learning with bandwidth pooling, and asynchronous decentralized learning.
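As a rough illustration of the kind of primitive that decentralized learning systems such as those above build on, the sketch below implements plain pairwise gossip averaging over a small ring of nodes. It is a minimal, generic example: the topology, values, and function names are hypothetical and are not taken from Epidemic Learning, DecentralizePy, or any other system of the lab.

```python
import random

def gossip_average(values, neighbors, rounds=50):
    """Minimal gossip-averaging sketch (hypothetical, for illustration only).

    Each node repeatedly picks a random neighbor and both replace their
    local value with the pair's average. Pairwise averaging preserves the
    global sum, so all nodes converge toward the network-wide mean without
    any central coordinator.
    """
    values = dict(values)  # work on a copy of the initial node values
    for _ in range(rounds):
        for node, neigh in neighbors.items():
            peer = random.choice(neigh)           # pick a random neighbor
            avg = (values[node] + values[peer]) / 2
            values[node] = avg                    # both endpoints adopt
            values[peer] = avg                    # the pairwise average
    return values

# Ring of 4 nodes holding values 1.0 .. 4.0; all converge toward the mean 2.5.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(gossip_average({i: float(i + 1) for i in range(4)}, neighbors))
```

In an actual decentralized learning system the scalar values would be model parameters or gradients, and the averaging schedule, topology, and communication would be the subject of the design choices (bandwidth, asynchrony, privacy) that the projects listed above investigate.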