Unit

Scalable Computing Systems Laboratory
Summary

The Scalable Computing Systems Laboratory at EPFL focuses on designing efficient large-scale distributed systems, including datacenters, edge computing, fully decentralized systems, and self-organizing systems. Its research interests cover system support for machine learning, federated learning systems, large-scale recommenders, graph-based systems, and privacy-aware recommendation systems. The lab addresses the challenges of scaling systems to thousands or even millions of distributed entities, emphasizing scalable design, failure resilience, performance, and privacy preservation. Recent projects include Epidemic Learning, DecentralizePy for decentralized learning, and FLEET for online federated learning. Ongoing student projects involve end-to-end auditing of decentralized learning, boosting decentralized learning with bandwidth pooling, and asynchronous decentralized learning.
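To make the decentralized-learning setting concrete, here is a minimal sketch of one gossip-style training round, in which every node performs a local update and then averages its model with those of its neighbours. This is an illustrative toy, not the lab's DecentralizePy or Epidemic Learning code; the topology, toy objective, and function names are assumptions made for the example.

```python
import numpy as np

def local_step(model, data, lr=0.05):
    """Toy local update: one gradient step on a least-squares objective."""
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    return model - lr * grad

def gossip_round(models, neighbours, datasets):
    """One decentralized-learning round: each node trains locally,
    then averages its model with its neighbours' models."""
    trained = [local_step(m, d) for m, d in zip(models, datasets)]
    return [
        np.mean([trained[j] for j in neighbours[i]] + [m], axis=0)
        for i, m in enumerate(trained)
    ]

# Example: three nodes on a fully connected topology with random local data.
rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
models = [np.zeros(4) for _ in range(3)]
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(10):
    models = gossip_round(models, neighbours, datasets)
```

With a sparser neighbour graph, the same round structure trades communication cost against how quickly the nodes' models converge toward a common solution, which is the kind of trade-off the lab's decentralized-learning projects study.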

Related publications (32)

Privacy-Preserving Machine Learning on Graphs

Sina Sajadmanesh

Graph Neural Networks (GNNs) have emerged as a powerful tool for learning on graphs, demonstrating exceptional performance in various domains. However, as GNNs become increasingly popular, new challenges arise. One of the most pressing is the need to ensur ...
EPFL, 2023

Equivariant Neural Architectures for Representing and Generating Graphs

Clément Arthur Yvon Vignac

Graph machine learning offers a powerful framework with natural applications in scientific fields such as chemistry, biology and material sciences. By representing data as a graph, we encode the prior knowledge that the data is composed of a set of entitie ...
EPFL, 2023

Arbitrary Decisions are a Hidden Cost of Differentially Private Training

Carmela González Troncoso, Bogdan Kulynych

Mechanisms used in privacy-preserving machine learning often aim to guarantee differential privacy (DP) during model training. Practical DP-ensuring training methods use randomization when fitting model parameters to privacy-sensitive data (e.g., adding Ga ...
New York, 2023
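The abstract above points to the core mechanism of differentially private training: model parameters are fitted with deliberately randomized updates. As a rough, hedged sketch of that idea (a generic DP-SGD-style step, not the paper's own method), each per-example gradient is clipped to a fixed norm and Gaussian noise is added to the average before the update; the clipping norm, noise multiplier, and function name below are assumptions for illustration.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative DP-SGD-style update: clip per-example gradients,
    average them, add Gaussian noise, then take a gradient step."""
    rng = rng or np.random.default_rng()
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, noise_multiplier * clip_norm / len(clipped), size=avg.shape
    )
    return params - lr * (avg + noise)

# Example: one noisy update on a batch of 8 random 4-dimensional gradients.
rng = np.random.default_rng(1)
grads = [rng.normal(size=4) for _ in range(8)]
params = dp_sgd_step(np.zeros(4), grads, rng=rng)
```

Because fresh noise is drawn at every step, two training runs on the same data can end up at noticeably different models, which is the kind of run-dependent, arbitrary behaviour the paper's title refers to.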
