
Regularized Diffusion Adaptation via Conjugate Smoothing

Related publications (37)

Extensions of Peer Prediction Incentive Mechanisms

Adam Julian Richardson

As large, data-driven artificial intelligence models become ubiquitous, guaranteeing high data quality is imperative for constructing models. Crowdsourcing, community sensing, and data filtering have long been the standard approaches to guaranteeing or imp ...
EPFL, 2024

Efficient local linearity regularization to overcome catastrophic overfitting

Volkan Cevher, Grigorios Chrysos, Fanghui Liu, Elias Abad Rocamora

Catastrophic overfitting (CO) in single-step adversarial training (AT) results in abrupt drops in the adversarial test accuracy (even down to 0%). For models trained with multi-step AT, it has been observed that the loss function behaves locally linearly w ...
2024

Memory of Motion for Initializing Optimization in Robotics

Teguh Santoso Lembono

Many robotics problems are formulated as optimization problems. However, most optimization solvers in robotics are locally optimal, and their performance depends heavily on the initial guess. For challenging problems, the solver will often get stuck at poor loc ...
EPFL, 2022

Semi-Discrete Optimal Transport: Hardness, Regularization and Numerical Solution

Daniel Kuhn, Soroosh Shafieezadeh Abadeh, Bahar Taskesen

Semi-discrete optimal transport problems, which evaluate the Wasserstein distance between a discrete and a generic (possibly non-discrete) probability measure, are believed to be computationally hard. Even though such problems are ubiquitous in statistics, ...
2021

Comparison of non-parametric T2 relaxometry methods for myelin water quantification

Jean-Philippe Thiran, Tobias Kober, Tom Hilbert, Erick Jorge Canales Rodriguez, Marco Pizzolato, Gian Franco Piredda, Thomas Yu, Alessandro Daducci, Nicolas Kunz

Multi-component T2 relaxometry allows probing tissue microstructure by assessing compartment-specific T2 relaxation times and water fractions, including the myelin water fraction. Non-negative least squares (NNLS) with zero-order Tikhonov regularization is ...
2021

Wasserstein Distributionally Robust Learning

Soroosh Shafieezadeh Abadeh

Many decision problems in science, engineering, and economics are affected by uncertainty, which is typically modeled by a random variable governed by an unknown probability distribution. For many practical applications, the probability distribution is onl ...
EPFL, 2020

From Data to Decisions: Distributionally Robust Optimization is Optimal

Daniel Kuhn, Peyman Mohajerin Esfahani

We study stochastic programs where the decision-maker cannot observe the distribution of the exogenous uncertainties but has access to a finite set of independent samples from this distribution. In this setting, the goal is to find a procedure that transfo ...
2020

Scalable Stochastic Optimization: Scenario Reduction with Guarantees

Kilian Schindler

Stochastic optimization is a popular modeling paradigm for decision-making under uncertainty and has a wide spectrum of applications in management science, economics and engineering. However, the stochastic optimization models one faces in practice are int ...
EPFL, 2020

Distributional Robustness in Mechanism Design

Cagil Kocyigit

Mechanism design theory examines the design of allocation mechanisms or incentive systems involving multiple rational but self-interested agents and plays a central role in many societally important problems in economics. In mechanism design problems, agen ...
EPFL, 2020

A Unifying Representer Theorem for Inverse Problems and Machine Learning

Michaël Unser

Regularization addresses the ill-posedness of the training problem in machine learning or the reconstruction of a signal from a limited number of measurements. The method is applicable whenever the problem is formulated as an optimization task. The standar ...
2020
