Publication

Not All Samples Are Created Equal: Deep Learning with Importance Sampling

Related publications (39)

Statistical Inference for Inverse Problems: From Sparsity-Based Methods to Neural Networks

Pakshal Narendra Bohra

In inverse problems, the task is to reconstruct an unknown signal from its possibly noise-corrupted measurements. Penalized-likelihood-based estimation and Bayesian estimation are two powerful statistical paradigms for the resolution of such problems. They ...
EPFL, 2024

Deep Learning Generalization with Limited and Noisy Labels

Mahsa Forouzesh

Deep neural networks have become ubiquitous in today's technological landscape, finding their way into a vast array of applications. Deep supervised learning, which relies on large labeled datasets, has been particularly successful in areas such as image cla ...
EPFL, 2023

A Statistical Framework to Investigate the Optimality of Signal-Reconstruction Methods

Michaël Unser, Pakshal Narendra Bohra

We present a statistical framework to benchmark the performance of reconstruction algorithms for linear inverse problems, in particular, neural-network-based methods that require large quantities of training data. We generate synthetic signals as realizati ...
IEEE (Institute of Electrical and Electronics Engineers), 2023

An equilibrated flux a posteriori error estimator for defeaturing problems

Annalisa Buffa, Denise Grappein, Rafael Vazquez Hernandez, Ondine Gabrielle Chanon

An a posteriori error estimator based on an equilibrated flux reconstruction is proposed for defeaturing problems in the context of finite element discretizations. Defeaturing consists in the simplification of a geometry by removing features that are consi ...
2023

Transformer Models for Vision

Jean-Baptiste Francis Marie Juliette Cordonnier

Recent developments in deep learning cover a wide variety of tasks such as image classification, text translation, playing Go, and folding proteins. All these successful methods depend on a gradient-based learning algorithm to train a model on massive a ...
EPFL, 2023

Alpha-NML Universal Predictors

Michael Christoph Gastpar, Marco Bondaschi

Inspired by Sibson’s alpha-mutual information, we introduce a new parametric class of universal predictors. This class interpolates between two well-known predictors: the mixture estimator, which includes the Laplace and Krichevsky-Trofimov predictors, and the ...
2022

Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity

Nicolas Henri Bernard Flammarion, Scott William Pesme, Loucas Pillaud-Vivien

Understanding the implicit bias of training algorithms is of crucial importance in order to explain the success of overparametrised neural networks. In this paper, we study the dynamics of stochastic gradient descent over diagonal linear networks through i ...
2021

MATHICSE Technical Report: Eigenfunction martingale estimating functions and filtered data for drift estimation of discretely observed multiscale diffusions

Assyr Abdulle, Andrea Zanoni, Grigorios A. Pavliotis

We propose a novel method for drift estimation of multiscale diffusion processes when a sequence of discrete observations is given. For the Langevin dynamics in a two-scale potential, our approach relies on the eigenvalues and the eigenfunctions of the hom ...
MATHICSE, 2021

Multi-ion-sensing emulator and multivariate calibration optimization by machine learning models

Giovanni De Micheli, Sandro Carrara, Mandresy Ivan Ny Hanitra, Francesca Criscuolo

One paramount challenge in multi-ion sensing arises from ion interference, which degrades the accuracy of sensor calibration. Machine learning models are proposed here to optimize such multivariate calibration. However, the acquisition of big experimental da ...
2021

Loss landscape and symmetries in Neural Networks

Mario Geiger

Neural networks (NNs) have been very successful in a variety of tasks ranging from machine translation to image classification. Despite their success, the reasons for their performance are still not well-understood. This thesis explores two main themes: lo ...
EPFL, 2021
