Related publications (293)

Efficient local linearity regularization to overcome catastrophic overfitting

Volkan Cevher, Grigorios Chrysos, Fanghui Liu, Elias Abad Rocamora

Catastrophic overfitting (CO) in single-step adversarial training (AT) results in abrupt drops in the adversarial test accuracy (even down to 0%). For models trained with multi-step AT, it has been observed that the loss function behaves locally linearly w ...
2024
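
For context, here is a minimal sketch (assumed PyTorch-style code, not taken from the paper) of the single-step FGSM adversarial training loop in which catastrophic overfitting is typically observed; model, inputs, epsilon, and optimizer are hypothetical placeholders, and the paper's local-linearity regularizer itself is not reproduced.

    # Hedged sketch: single-step (FGSM) adversarial training, the regime where CO arises.
    # `model`, `x`, `y`, `epsilon`, and `optimizer` are assumed placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_training_step(model, x, y, epsilon, optimizer):
        # Build the one-step (FGSM) adversarial example in an L-infinity ball of radius epsilon.
        x_adv = x.clone().detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

        # Standard training update on the perturbed input.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()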

Towards Trustworthy Deep Learning for Image Reconstruction

Alexis Marie Frederic Goujon

The remarkable ability of deep learning (DL) models to approximate high-dimensional functions from samples has sparked a revolution across numerous scientific and industrial domains, one whose impact cannot be overemphasized. In sensitive applications, the good perform ...
EPFL, 2024

High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization

Volkan Cevher, Fanghui Liu

This paper studies kernel ridge regression in high dimensions under covariate shifts and analyzes the role of importance re-weighting. We first derive the asymptotic expansion of high dimensional kernels under covariate shifts. By a bias-variance decomposi ...
2024
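
As a point of reference only (a standard formulation, not quoted from the paper), importance re-weighted kernel ridge regression can be written as

    \hat{f} = \arg\min_{f \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} w(x_i)\,\big(f(x_i) - y_i\big)^2 + \lambda \|f\|_{\mathcal{H}}^2,
    \qquad w(x) = \frac{p_{\mathrm{test}}(x)}{p_{\mathrm{train}}(x)},

where w is the importance weight, i.e. the density ratio between the target and source covariate distributions; the paper's high-dimensional asymptotic analysis of this estimator is not reproduced here.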

Benign Landscapes of Low-Dimensional Relaxations for Orthogonal Synchronization on General Graphs

Nicolas Boumal

Orthogonal group synchronization is the problem of estimating n elements Z_1, …, Z_n from the r×r orthogonal group given some relative measurements R_ij ≈ Z_i Z_j^{-1}. The least-squares formulation is nonconvex. To avoid its local minim ...
SIAM Publications, 2024
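
For orientation, the nonconvex least-squares formulation mentioned in the abstract is conventionally written (a standard form, not quoted from the paper) as

    \min_{Z_1, \dots, Z_n \in \mathrm{O}(r)} \; \sum_{(i,j) \in E} \big\| R_{ij} - Z_i Z_j^{-1} \big\|_F^2,

where E is the edge set of the measurement graph; the low-dimensional relaxations analyzed in the paper are not reproduced here.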

Understanding generalization and robustness in modern deep learning

Maksym Andriushchenko

In this thesis, we study two closely related directions: robustness and generalization in modern deep learning. Deep learning models based on empirical risk minimization are known to be often non-robust to small, worst-case perturbations known as adversari ...
EPFL, 2024

Bayes-optimal Learning of Deep Random Networks of Extensive-width

Florent Gérard Krzakala, Lenka Zdeborová, Hugo Chao Cui

We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights. We consider the asymptotic limit where the number of samples, the input dimension and the network width ...
2023

Regularization Techniques for Low-Resource Machine Translation

Alejandro Ramírez Atrio

Neural machine translation (MT) and text generation have recently reached very high levels of quality. However, both areas share a problem: to reach these levels, they require massive amounts of data. When such data is not available, they lack generaliza ...
EPFL, 2023

A Neural-Network-Based Convex Regularizer for Inverse Problems

Michaël Unser, Pakshal Narendra Bohra, Alexis Marie Frederic Goujon, Sebastian Jonas Neumayer, Stanislas Ducotterd

The emergence of deep-learning-based methods to solve image-reconstruction problems has enabled a significant increase in quality. Unfortunately, these new methods often lack reliability and explainability, and there is growing interest in addressing these ...
2023

Regularization of polynomial networks for image recognition

Volkan Cevher, Grigorios Chrysos, Bohan Wang

Deep Neural Networks (DNNs) have obtained impressive performance across tasks; however, they still remain black boxes, e.g., they are hard to analyze theoretically. At the same time, Polynomial Networks (PNs) have emerged as an alternative method with a promising ...
2023

Automatic Parameterization for Aerodynamic Shape Optimization via Deep Geometric Learning

Pascal Fua, Zhen Wei

We propose two deep learning models that fully automate shape parameterization for aerodynamic shape optimization. Both models are optimized to perform shape parameterization via deep geometric learning, embedding human prior knowledge into learned geometric patterns, elimina ...
2023
