Lecture

Causality for Robust ML

Description

This lecture by the instructor from Oregon State University argues that causality is indispensable for robust and reliable machine learning. It covers ideal datasets, key problems caused by missing data, graphical models for encoding assumptions transparently, and the recoverability of quantities of interest from incompletely observed data. The lecture also examines problematic graph structures, testable implications, linear models for inference, and main results on bias amplification and the impact of sample size. Theoretical impossibility theorems for missing data are presented, along with real-world examples of missing-not-at-random (MNAR) scenarios. The lecture concludes with references to related work.
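To make the missing-data distinction concrete, the sketch below is a hypothetical illustration (not taken from the lecture materials): it simulates a variable whose values are dropped either completely at random (MCAR) or in a way that depends on the value itself (MNAR, here self-censoring of high values), and compares the naive complete-case mean against the true mean. Under MNAR the naive estimate is systematically biased, which is what motivates the recoverability question discussed in the lecture.

```python
# Hypothetical illustration: bias of complete-case estimation under MNAR.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
income = rng.lognormal(mean=10.0, sigma=0.5, size=n)  # true variable of interest
true_mean = income.mean()

# MCAR: every record is dropped with the same probability (30%).
mcar_observed = income[rng.random(n) > 0.3]

# MNAR: higher values are more likely to be withheld (self-censoring).
p_missing = 1 / (1 + np.exp(-(income - np.median(income)) / income.std()))
mnar_observed = income[rng.random(n) > p_missing]

print(f"true mean:            {true_mean:,.0f}")
print(f"complete-case (MCAR): {mcar_observed.mean():,.0f}")  # close to the truth
print(f"complete-case (MNAR): {mnar_observed.mean():,.0f}")  # systematically biased
```

Graphical missing-data models (m-graphs) formalize when such bias can be corrected: if the missingness mechanism can be encoded in a graph, recoverability results say whether the target quantity can be consistently estimated from the observed data at all.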
