Publication

From Kernel Methods to Neural Networks: A Unifying Variational Formulation

Related publications (293)

Probabilistic methods for neural combinatorial optimization

Nikolaos Karalias

The monumental progress in the development of machine learning models has led to a plethora of applications with transformative effects in engineering and science. This has also turned the attention of the research community towards the pursuit of construc ...
EPFL, 2023

A Theory of Finite-Width Neural Networks: Generalization, Scaling Laws, and the Loss Landscape

Berfin Simsek

Deep learning has achieved remarkable success in various challenging tasks such as generating images from natural language or engaging in lengthy conversations with humans. The success in practice stems from the ability to successfully train massive neural ...
EPFL, 2023

How deep convolutional neural networks lose spatial information with training

Matthieu Wyart, Leonardo Petrini, Umberto Maria Tomasini, Francesco Cagnetta

A central question of machine learning is how deep nets manage to learn tasks in high dimensions. An appealing hypothesis is that they achieve this feat by building a representation of the data where information irrelevant to the task is lost. For image da ...
Bristol, 2023
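The hypothesis above can be probed by comparing a network's response to a small spatial deformation against its response to generic noise of equal magnitude. A minimal sketch of such a measurement, assuming a one-pixel circular shift as the deformation and an untrained random convolution as a stand-in model (the paper itself studies trained deep networks and smooth diffeomorphisms):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_cnn(x, w):
    """Stand-in feature map: one valid 3x3 convolution + ReLU, flattened.
    A placeholder for a trained network, which this sketch does not include."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = max((x[i:i + 3, j:j + 3] * w).sum(), 0.0)
    return out.ravel()

def relative_sensitivity(f, x, rng, n_trials=20):
    """Ratio of f's response to a one-pixel shift vs. Gaussian noise of the
    same norm; a falling ratio means spatial detail matters less to f than
    generic perturbations."""
    fx = f(x)
    shifted = np.roll(x, 1, axis=1)              # small spatial deformation
    d_shift = np.linalg.norm(f(shifted) - fx)
    eps = np.linalg.norm(shifted - x)            # match perturbation size
    d_noise = 0.0
    for _ in range(n_trials):
        n = rng.normal(size=x.shape)
        d_noise += np.linalg.norm(f(x + eps * n / np.linalg.norm(n)) - fx)
    return d_shift / (d_noise / n_trials)

x = rng.normal(size=(16, 16))
w = rng.normal(size=(3, 3))
print(relative_sensitivity(lambda im: toy_cnn(im, w), x, rng))
```

In the paper's framing one would track this kind of ratio over training; here the harness is applied to a random filter only to keep the sketch self-contained.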

Hamiltonian Deep Neural Networks Guaranteeing Non-Vanishing Gradients by Design

Giancarlo Ferrari Trecate, Luca Furieri, Clara Lucía Galimberti, Liang Xu

Training Deep Neural Networks (DNNs) can be difficult due to vanishing and exploding gradients during weight optimization through backpropagation. To address this problem, we propose a general class of Hamiltonian DNNs (H-DNNs) that stem from the discretiz ...
2023
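A minimal sketch of the discretization idea, assuming forward-Euler steps of dy/dt = J grad H(y) with a log-cosh Hamiltonian and a skew-symmetric interconnection J (the exact H-DNN parametrisations in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

n, depth, h = 8, 32, 0.1
A = rng.normal(size=(n, n))
J = A - A.T                                      # skew-symmetric: J = -J^T
Ws = [rng.normal(size=(n, n)) / np.sqrt(n) for _ in range(depth)]
bs = [np.zeros(n) for _ in range(depth)]

def hdnn_forward(y):
    """Each layer is one forward-Euler step of dy/dt = J grad H(y) with
    H(y) = sum(log(cosh(W y + b))), hence grad H(y) = W^T tanh(W y + b).
    For time-invariant (W, b) the continuous flow conserves H; this kind
    of structure is what H-DNNs exploit to avoid vanishing gradients."""
    out = y.copy()
    for W, b in zip(Ws, bs):
        out = out + h * J @ (W.T @ np.tanh(W @ out + b))
    return out

y0 = rng.normal(size=n)
print(np.linalg.norm(y0), np.linalg.norm(hdnn_forward(y0)))
```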

Leveraging Unlabeled Data to Track Memorization

Patrick Thiran, Mahsa Forouzesh, Hanie Sedghi

Deep neural networks may easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy label memorization. We propose a metric, ca ...
2023
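The metric's name is truncated above and, per the title, it leverages unlabeled data; the sketch below instead uses a standard random-label probe (deliberately mislabel a small subset and watch how quickly a model fits it) to illustrate what memorization means operationally. It is not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean labels from a linear teacher, plus a probe set with random labels.
d, n_clean, n_probe = 20, 200, 50
w_star = rng.normal(size=d)
X = rng.normal(size=(n_clean + n_probe, d))
y = np.where(X @ w_star > 0, 1.0, -1.0)
y[n_clean:] = rng.choice([-1.0, 1.0], size=n_probe)   # deliberately mislabeled

# Overparameterized model: fixed random ReLU features, trainable readout.
width = 1500
W = rng.normal(size=(d, width)) / np.sqrt(d)
Phi = np.maximum(X @ W, 0.0) / np.sqrt(width)

a = np.zeros(width)
lr = 5.0
for step in range(4001):
    resid = Phi @ a - y
    a -= lr * Phi.T @ resid / len(y)          # squared-loss gradient step
    if step % 800 == 0:
        pred = np.sign(Phi @ a)
        print(f"step {step:4d}  clean acc {(pred[:n_clean] == y[:n_clean]).mean():.2f}"
              f"  random-label acc {(pred[n_clean:] == y[n_clean:]).mean():.2f}")
```

Accuracy on the randomly labeled probe can only rise through memorization, and it typically lags the clean accuracy; tracking that gap is what probes of this kind are for.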

Penalising the biases in norm regularisation enforces sparsity

Nicolas Henri Bernard Flammarion, Etienne Patrice Boursier

Controlling the parameters' norm often yields good generalisation when training neural networks. Beyond simple intuitions, the relation between the parameters' norm and the obtained estimators remains poorly understood theoretically. For one-hidden-layer ReLU networks ...
2023
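A toy check of the claimed effect (a sketch, not the paper's experiment): train a wide one-hidden-layer ReLU regressor with weight decay applied to all parameters, biases included, then count the hidden units whose effective norm survives. The constants below are arbitrary, and how much sparsity emerges depends on the regularisation strength:

```python
import numpy as np

rng = np.random.default_rng(0)

n, width, lam, lr = 64, 100, 5e-3, 2e-2
x = np.linspace(-1, 1, n)[:, None]
y = np.sin(np.pi * x[:, 0])

W = rng.normal(size=(1, width))
b = rng.normal(size=width)
a = rng.normal(size=width) / np.sqrt(width)

for step in range(40000):
    pre = x @ W + b                       # (n, width) pre-activations
    h = np.maximum(pre, 0.0)
    r = h @ a - y                         # residuals
    gh = np.outer(r, a) * (pre > 0) / n   # backprop through the ReLU
    a -= lr * (h.T @ r / n + lam * a)
    W -= lr * (x.T @ gh + lam * W)
    b -= lr * (gh.sum(0) + lam * b)       # biases are penalised too

effective = np.abs(a) * np.linalg.norm(np.vstack([W, b[None]]), axis=0)
print("active hidden units:", int((effective > 1e-2).sum()), "of", width)
print("train mse:", float(np.mean(r ** 2)))
```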

Phase Retrieval: From Computational Imaging to Machine Learning: A Tutorial

Michaël Unser, Thanh-An Michel Pham, Jonathan Yuelin Dong

Phase retrieval consists in the recovery of a complex-valued signal from intensity-only measurements. As it pervades a broad variety of applications, many researchers have striven to develop phase-retrieval algorithms. Classical approaches involve techniqu ...
2023
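The abstract points to classical techniques; below is a minimal sketch of one of them, assuming the classic error-reduction scheme (Gerchberg–Saxton-style alternating projections) on a toy problem: recover a nonnegative, support-limited 1-D signal from its Fourier magnitudes. Since 1-D phase retrieval has inherent ambiguities (global phase, shifts, conjugate flips), the data residual rather than exact recovery is the honest success measure:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 128
support = slice(40, 70)
x_true = np.zeros(n)
x_true[support] = rng.random(30)             # nonnegative, known support
mag = np.abs(np.fft.fft(x_true))             # intensity-only measurements

x = rng.random(n)                            # random initial guess
for _ in range(500):
    F = np.fft.fft(x)
    F = mag * np.exp(1j * np.angle(F))       # project onto measured magnitudes
    x = np.fft.ifft(F).real
    proj = np.zeros(n)                       # project onto object constraints:
    proj[support] = np.maximum(x[support], 0.0)   # support + nonnegativity
    x = proj

resid = np.linalg.norm(np.abs(np.fft.fft(x)) - mag) / np.linalg.norm(mag)
print(f"relative Fourier-magnitude residual: {resid:.3e}")
```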

Polynomial-time universality and limitations of deep learning

Emmanuel Abbé

The goal of this paper is to characterize function distributions that general neural networks trained by descent algorithms (GD/SGD) can or cannot learn in poly-time. The results are: (1) The paradigm of general neural networks trained by SGD is poly-time ...
Wiley, 2023

Learning ground states of gapped quantum Hamiltonians with kernel methods

Giuseppe Carleo, Riccardo Rossi, Clemens Giuliani, Filippo Vicentini

Neural network approaches to approximate the ground state of quantum Hamiltonians require the numerical solution of a highly nonlinear optimization problem. We introduce a statistical learning approach that makes the optimization trivial by using kernel me ...
Wien, 2023
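The entry gives no implementation details, so the following is an assumed reading of the idea: alternate power-method steps, which converge to the ground state of a gapped Hamiltonian, with kernel ridge regression fits, each of which is a closed-form linear solve rather than a nonlinear optimization. Toy transverse-field Ising chain, with dense matrices for illustration only:

```python
import numpy as np

L = 8
dim = 2 ** L
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def site_op(op, i):
    """Embed a single-site operator at site i of an L-spin chain."""
    out = np.array([[1.0]])
    for j in range(L):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

# H = -sum_i Z_i Z_{i+1} - sum_i X_i  (open chain, transverse field = 1)
H = -sum(site_op(sz, i) @ site_op(sz, i + 1) for i in range(L - 1))
H -= sum(site_op(sx, i) for i in range(L))

# Gaussian kernel between spin configurations (one row of +-1 per basis state)
spins = np.array([[1.0 - 2 * ((k >> j) & 1) for j in range(L)] for k in range(dim)])
d2 = ((spins[:, None, :] - spins[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 4.0)

shift = np.abs(H).sum(axis=1).max()          # Gershgorin bound on the spectrum
psi = np.ones(dim) / np.sqrt(dim)
for _ in range(300):
    target = shift * psi - H @ psi                           # one power-method step
    alpha = np.linalg.solve(K + 1e-8 * np.eye(dim), target)  # KRR fit: linear solve
    psi = K @ alpha
    psi /= np.linalg.norm(psi)

print("kernel estimate of E0:", psi @ H @ psi)
print("exact ground energy:  ", np.linalg.eigvalsh(H)[0])
```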

ReLU Neural Network Galerkin BEM

Fernando José Henriquez Barraza

We introduce Neural Network (NN) approximation architectures for the numerical solution of Boundary Integral Equations (BIEs). We exemplify the proposed NN approach for the boundary reduction of the potential problem in two spatial dime ...
Springer/Plenum Publishers, 2023
