Related publications (35)

Lower-bounds on the Bayesian Risk in Estimation Procedures via f-Divergences

Michael Christoph Gastpar, Adrien Vandenbroucque, Amedeo Roberto Esposito

We consider the problem of parameter estimation in a Bayesian setting and propose a general lower-bound that includes part of the family of f-Divergences. The results are then applied to specific settings of interest and compared to other notable results ...
2022
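
As background for the terminology in the title (the excerpt above does not state the paper's bound, so none is reproduced here): for a convex function f with f(1) = 0 and distributions P and Q with P absolutely continuous with respect to Q, the f-divergence is defined as

    D_f(P \,\|\, Q) = \mathbb{E}_Q\!\left[ f\!\left( \tfrac{dP}{dQ} \right) \right],

which recovers the KL divergence for f(t) = t log t and the total variation distance for f(t) = |t - 1| / 2.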

Evaluating the Explainers: Black-Box Explainable Machine Learning for Student Success Prediction in MOOCs

Vinitra Swamy, Bahar Radmehr, Mirko Marras, Natasa Krco

Neural networks are ubiquitous in applied machine learning for education. Their pervasive success in predictive performance comes alongside a severe weakness, the lack of explainability of their decisions, especially relevant in human-centric fields ...
2022

A Wasserstein-based measure of conditional dependence

Negar Kiyavash, Seyed Jalal Etesami, Kun Zhang

Measuring conditional dependencies among the variables of a network is of great interest to many disciplines. This paper studies some shortcomings of the existing dependency measures in detecting direct causal influences or their lack of ability for group ...
2022
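
For reference, the building block named in the title is the order-p Wasserstein distance: for distributions P and Q on a common metric space with ground metric d,

    W_p(P, Q) = \left( \inf_{\pi \in \Pi(P, Q)} \int d(x, y)^p \, \mathrm{d}\pi(x, y) \right)^{1/p},

where \Pi(P, Q) denotes the set of couplings of P and Q. How the paper turns this into a conditional-dependence measure is not specified in the excerpt, so it is not sketched here.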

Locally Differentially-Private Randomized Response for Discrete Distribution Learning

Michael Christoph Gastpar, Adriano Pastore

We consider a setup in which confidential i.i.d. samples X1, ..., Xn from an unknown finite-support distribution p are passed through n copies of a discrete privatization channel (a.k.a. mechanism) producing outputs Y1, ..., Yn. The channel law ...
2021
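
A minimal sketch of the classical k-ary randomized-response mechanism, the textbook example of an epsilon-locally differentially private channel for discrete data; the function name and the example distribution below are made up for illustration, and the paper's actual channel law and learning analysis may differ.

    import numpy as np

    def randomized_response(x, k, epsilon, rng):
        # Classical k-ary randomized response (assumed here for illustration):
        # report the true symbol x in {0, ..., k-1} with probability
        # e^eps / (e^eps + k - 1), otherwise report one of the other k - 1
        # symbols uniformly at random. This satisfies epsilon-local DP, since
        # the likelihood ratio of any output under any two inputs is <= e^eps.
        p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
        if rng.random() < p_true:
            return x
        other = rng.integers(k - 1)              # uniform over the other symbols
        return other if other < x else other + 1

    # Privatize n i.i.d. samples drawn from an unknown finite-support distribution p
    rng = np.random.default_rng(0)
    p = np.array([0.5, 0.3, 0.2])
    X = rng.choice(len(p), size=1000, p=p)
    Y = np.array([randomized_response(x, len(p), epsilon=1.0, rng=rng) for x in X])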

Generalization Error Bounds Via Rényi-, f-Divergences and Maximal Leakage

Michael Christoph Gastpar, Amedeo Roberto Esposito, Ibrahim Issa

In this work, the probability of an event under some joint distribution is bounded in terms of its probability under the product of the marginals (which is typically easier to analyze), together with a measure of the dependence between the two random variables. ...
2021
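
For reference, maximal leakage, one of the dependence measures named in the title, has the standard definition (for finite alphabets)

    \mathcal{L}(X \to Y) = \log \sum_{y} \max_{x : P_X(x) > 0} P_{Y|X}(y \mid x).

The paper's bounds themselves are not visible in the excerpt above and are therefore not reproduced.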

Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts

Daniel Kuhn, Viet Anh Nguyen, Bahar Taskesen

Least squares estimators, when trained on a few target domain samples, may predict poorly. Supervised domain adaptation aims to improve the predictive accuracy by exploiting additional labeled training samples from a source distribution that is close to ...
2021
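
A toy numerical sketch of the baseline mentioned in the excerpt (least squares trained on only a few target-domain samples) next to the naive alternative of pooling source and target data; the data-generating process below is invented for illustration and this is not the paper's method of synthesizing distributionally robust experts.

    import numpy as np

    rng = np.random.default_rng(1)
    d = 5
    beta_target = rng.normal(size=d)
    beta_source = beta_target + 0.3 * rng.normal(size=d)   # a nearby source domain

    def make_data(beta, n):
        X = rng.normal(size=(n, d))
        return X, X @ beta + 0.1 * rng.normal(size=n)

    X_t, y_t = make_data(beta_target, 10)      # only a few target samples
    X_s, y_s = make_data(beta_source, 1000)    # plenty of source samples

    # Least squares on the scarce target data alone vs. naively pooling both domains
    beta_target_only = np.linalg.lstsq(X_t, y_t, rcond=None)[0]
    beta_pooled = np.linalg.lstsq(np.vstack([X_s, X_t]),
                                  np.concatenate([y_s, y_t]), rcond=None)[0]

    print(np.linalg.norm(beta_target_only - beta_target))  # high variance (few samples)
    print(np.linalg.norm(beta_pooled - beta_target))       # biased toward the source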

Robust Generalization via f-Mutual Information

Michael Christoph Gastpar, Amedeo Roberto Esposito, Ibrahim Issa

Given two probability measures P and Q and an event E, we provide bounds on P(E) in terms of Q(E) and f-divergences. In particular, the bounds are instantiated when the measures considered are a joint distribution and the corresponding product of marginals ...
IEEE, 2020
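
One elementary instance of the kind of bound described above uses the total variation distance, itself an f-divergence (f(t) = |t - 1| / 2): for any event E,

    |P(E) - Q(E)| \le \mathrm{TV}(P, Q),

so with P a joint distribution and Q the product of its marginals, the probability of E under dependence is controlled by its probability under independence plus a dependence term. The paper's sharper f-mutual-information bounds are not reproduced here, since they do not appear in the excerpt.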

From Data to Decisions: Distributionally Robust Optimization is Optimal

Daniel Kuhn, Peyman Mohajerin Esfahani

We study stochastic programs where the decision-maker cannot observe the distribution of the exogenous uncertainties but has access to a finite set of independent samples from this distribution. In this setting, the goal is to find a procedure that ...
2020
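
For reference, the generic distributionally robust optimization template referred to in the title reads

    \min_{x \in \mathcal{X}} \; \sup_{\mathbb{Q} \in \mathcal{B}_\varepsilon(\widehat{\mathbb{P}}_n)} \mathbb{E}_{\mathbb{Q}}\big[ \ell(x, \xi) \big],

where \widehat{\mathbb{P}}_n is the empirical distribution of the n samples and \mathcal{B}_\varepsilon is an ambiguity set around it. The excerpt does not specify which ambiguity set or optimality criterion the paper analyzes, so none is assumed here.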

Time series data interpretation for ‘wheel-flat’ identification including uncertainties

Ian Smith, Numa Joy Bertola

Train wheel flats are formed when wheels slip on rails. Crucial for passenger comfort and the safe operation of train systems, early detection and quantification of wheel-flat severity without interrupting railway operations is a desirable and challenging ...
2019

Entropy as a tool for crystal discovery

Pablo Miguel Piaggi

The computational prediction of crystal structures has emerged as a useful alternative to expensive and often cumbersome experiments. We propose an approach to the prediction of crystal structures and polymorphism based on reproducing the crystallization ...
EPFL, 2019
