Publication

Multi-Level Monte Carlo Methods for Uncertainty Quantification and Risk-Averse Optimisation

Associated publications (89)

On the Generalization of Stochastic Gradient Descent with Momentum

Volkan Cevher, Kimon Antonakopoulos

While momentum-based accelerated variants of stochastic gradient descent (SGD) are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods. In this work, we first show that th ...
Brookline, 2024

Augmented Memory: Sample-Efficient Generative Molecular Design with Reinforcement Learning

Philippe Schwaller, Jeff Guo

Sample efficiency is a fundamental challenge in de novo molecular design. Ideally, molecular generative models should learn to satisfy a desired objective under minimal calls to oracles (computational property predictors). This problem becomes more apparen ...
American Chemical Society, 2024

A Study on Gradient-based Meta-learning for Robust Deep Digital Twins

Olga Fink, Raffael Pascal Theiler, Michele Viscione

Deep-learning-based digital twins (DDT) are a promising tool for data-driven system health management because they can be trained directly on operational data. A major challenge for efficient training, however, is that industrial datasets remain unlabeled. T ...
Research Publishing, 2023
