Related publications (448)

Robust Collaborative Learning with Linear Gradient Overhead

Rachid Guerraoui, Nirupam Gupta, John Stephan, Sadegh Farhadkhani, Le Nguyen Hoang, Rafaël Benjamin Pinot

Collaborative learning algorithms, such as distributed SGD (or D-SGD), are prone to faulty machines that may deviate from their prescribed algorithm because of software or hardware bugs, poisoned data or malicious behaviors. While many solutions have been ...
PMLR, 2023
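The robustness problem described above is typically addressed by replacing the plain gradient average in D-SGD with a robust aggregation rule. The sketch below shows one standard choice, a coordinate-wise trimmed mean; it is an illustration only, not necessarily the rule proposed in this paper.

```python
import numpy as np

def trimmed_mean(gradients, f):
    """Coordinate-wise trimmed mean: in each coordinate, discard the f
    largest and f smallest values across workers, then average the rest.
    This bounds the influence of up to f faulty workers."""
    g = np.sort(np.stack(gradients), axis=0)  # sort per coordinate
    return g[f:len(gradients) - f].mean(axis=0)
```

For example, with worker gradients 0, 1, 2 and a faulty 100, trimming with f = 1 averages only 1 and 2, discarding the outlier.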

Byzantine Fault-Tolerance in Federated Local SGD Under 2f-Redundancy

Nirupam Gupta

In this article, we study the problem of Byzantine fault-tolerance in a federated optimization setting, where there is a group of agents communicating with a centralized coordinator. We allow up to f Byzantine-faulty agents, which may not follow a prescr ...
Piscataway, 2023
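Under redundancy assumptions of this kind, the coordinator can filter suspicious gradients before averaging. The sketch below shows one simple norm-based filter, dropping the f largest-norm gradients and averaging the rest; this is an illustrative assumption, not necessarily the exact mechanism of the paper.

```python
import numpy as np

def gradient_elimination(gradients, f):
    """Norm-based filtering sketch: discard the f gradients with the
    largest Euclidean norm, then average the remaining ones."""
    norms = [np.linalg.norm(g) for g in gradients]
    keep = np.argsort(norms)[:len(gradients) - f]
    return np.mean([gradients[i] for i in keep], axis=0)
```

The intuition is that a faulty agent trying to dominate the average must submit a large gradient, which the norm filter removes.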

Phenomenological theory of variational quantum ground-state preparation

Ivano Tavernelli, Giuseppe Carleo

The variational approach is a cornerstone of computational physics, on both conventional and quantum computing platforms. The variational quantum eigensolver algorithm aims to prepare the ground state of a Hamiltonian exploiting para ...
College Park, 2023
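The variational idea can be illustrated on a toy problem: parametrize a trial state, evaluate the energy expectation ⟨ψ(θ)|H|ψ(θ)⟩, and minimize over θ. The single-qubit example below uses an assumed toy Hamiltonian H = Z (not taken from the paper) and a simple grid search in place of a real optimizer.

```python
import numpy as np

# Toy Hamiltonian (assumed for illustration): the Pauli-Z operator,
# whose ground state is |1> with energy -1.
H = np.array([[1.0, 0.0], [0.0, -1.0]])

def energy(theta):
    """Energy of the parametrized trial state
    |psi(theta)> = [cos(theta/2), sin(theta/2)]."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi  # expectation value <psi|H|psi>

# Grid search over the variational parameter (a stand-in for the
# classical optimization loop of a variational algorithm).
thetas = np.linspace(0, np.pi, 101)
best = thetas[np.argmin([energy(t) for t in thetas])]
```

Here the minimum sits at θ = π, where the trial state coincides with the true ground state of Z.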

Universal and adaptive methods for robust stochastic optimization

Ali Kavis

In contemporary machine learning problems, the efficiency of the optimization process depends on the properties of the model and the nature of the available data, which poses a significant problem as the complexity of either increases ad infinit ...
EPFL, 2023
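Adaptive methods of the kind studied here tune step sizes from observed gradients rather than from unknown problem constants. A minimal sketch of one classic example, AdaGrad, used purely as an illustration:

```python
import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: the per-coordinate step size shrinks with
    the accumulated squared gradients, adapting to gradient scale
    without knowing smoothness or noise constants in advance."""
    accum = accum + grad ** 2
    x = x - lr * grad / (np.sqrt(accum) + eps)
    return x, accum
```

Coordinates that have seen large gradients get smaller effective step sizes, which is the adaptivity the abstract alludes to.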

(S)GD over Diagonal Linear Networks: Implicit Regularisation, Large Stepsizes and Edge of Stability

Nicolas Henri Bernard Flammarion, Scott William Pesme, Mathieu Even

In this paper, we investigate the impact of stochasticity and large stepsizes on the implicit regularisation of gradient descent (GD) and stochastic gradient descent (SGD) over diagonal linear networks. We prove the convergence of GD and SGD with macroscop ...
2023
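A diagonal linear network reparametrizes a linear predictor's weights as the elementwise product of two weight vectors; it is this overparametrization that makes the implicit regularisation of (S)GD on such models non-trivial. A minimal sketch of the parametrization:

```python
import numpy as np

def diag_net_predict(u, v, X):
    """Two-layer diagonal linear network: the effective linear weights
    are w = u * v (elementwise), so predictions are X @ w. Gradient
    descent is run on (u, v) rather than directly on w."""
    return X @ (u * v)
```

Although the function class is still linear in X, training the factors (u, v) instead of w changes which interpolating solution gradient methods converge to.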

Fast Adversarial Training With Adaptive Step Size

Sabine Süsstrunk, Mathieu Salzmann, Chen Liu, Zhuoyi Huang, Yong Zhang, Jue Wang

While adversarial training and its variants have been shown to be the most effective algorithms to defend against adversarial attacks, their extremely slow training process makes it hard to scale to large datasets like ImageNet. The key idea of recent works to ...
Piscataway, 2023
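Fast adversarial training methods of this kind typically build on single-step attacks such as FGSM, which perturbs each input coordinate by a fixed budget eps in the sign direction of the loss gradient. The sketch below shows the perturbation step only; the paper's adaptive step-size rule is not reproduced here.

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """One-step FGSM: move each input coordinate by eps in the
    direction that increases the loss (the sign of the input
    gradient), yielding an L-infinity-bounded adversarial example."""
    return x + eps * np.sign(grad_x)
```

Training on such perturbed inputs at each step is what makes "fast" adversarial training cheap: one gradient-sign attack per minibatch instead of a multi-step inner loop.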

Regularization Techniques for Low-Resource Machine Translation

Alejandro Ramírez Atrio

Neural machine translation (MT) and text generation have recently reached very high levels of quality. However, both areas share a problem: in order to reach these levels, they require massive amounts of data. When such data is not available, they lack generaliza ...
EPFL, 2023

Time Series Analysis of Urban Liveability

Devis Tuia, Diego Marcos Gonzalez, Alex Hubertus Levering

In this paper we explore deep learning models to monitor longitudinal liveability changes in Dutch cities at the neighbourhood level. Our liveability reference data is defined by a country-wide yearly survey based on a set of indicators combined into a liv ...
IEEE, 2023

New Perspectives on Regularization and Computation in Optimal Transport-Based Distributionally Robust Optimization

Daniel Kuhn, Florian Dörfler, Soroosh Shafieezadeh Abadeh

We study optimal transport-based distributionally robust optimization problems where a fictitious adversary, often envisioned as nature, can choose the distribution of the uncertain problem parameters by reshaping a prescribed reference distribution at a f ...
2023

Augmented Lagrangian Methods for Provable and Scalable Machine Learning

Mehmet Fatih Sahin

Non-convex constrained optimization problems have become a powerful framework for modeling a wide range of machine learning problems, with applications in k-means clustering, large-scale semidefinite programs (SDPs), and various other tasks. As the perfor ...
EPFL, 2023
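Augmented Lagrangian methods handle a constraint c(x) = 0 by alternating between approximately minimizing a penalized objective and updating a multiplier estimate. A minimal single-constraint sketch with assumed hyperparameters (illustrative only, not the thesis's specific algorithm):

```python
import numpy as np

def augmented_lagrangian(f_grad, c, c_grad, x, lam, rho, lr=0.01, inner=200):
    """One outer iteration of an augmented Lagrangian method:
    approximately minimize f(x) + lam*c(x) + (rho/2)*c(x)**2 by
    gradient steps, then update the multiplier lam <- lam + rho*c(x)."""
    for _ in range(inner):
        g = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
        x = x - lr * g
    lam = lam + rho * c(x)
    return x, lam
```

For example, minimizing f(x) = x^2 subject to x - 1 = 0, a few outer iterations drive x toward the constrained optimum x = 1 while lam converges to the true multiplier -2.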
