Lecture

Provable and Generalizable Robustness in Deep Learning

Description

This lecture explores adversarial examples in machine learning, focusing on attacks crafted to mislead classifiers and on defenses against them. It covers certified (provable) defenses, including randomized Gaussian smoothing, which yields provable robustness to L2-bounded attacks. The instructor surveys the landscape of provable defenses and introduces the Neural Perceptual Threat Model, which approximates true perceptual distance. Various attacks, including L2-bounded and perceptual attacks, are analyzed, along with the development of a defense with generalizable robustness. The lecture concludes with results on the CIFAR-10 and ImageNet-100 datasets, showcasing the effectiveness of the different defense strategies.
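
As a rough illustration of the Gaussian (randomized) smoothing idea mentioned above, the sketch below builds a smoothed prediction by majority vote over Gaussian-perturbed copies of an input. The base `model`, the noise level `sigma`, and the sample count are hypothetical placeholders rather than the lecture's exact setup, and the statistical step that turns this vote into a certified L2 radius is omitted.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100, num_classes=10):
    """Minimal sketch of a smoothed classifier:
    g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I),
    approximated here by a Monte Carlo majority vote.
    `model`, `sigma`, and `n_samples` are illustrative choices only."""
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)        # add Gaussian noise
            pred = model(noisy.unsqueeze(0)).argmax(dim=1)  # base classifier vote
            counts[pred.item()] += 1
    return counts.argmax().item()  # majority vote over the noisy copies
```

In the full certification procedure, the vote counts are additionally used to compute a confidence bound on the top class probability, which determines the L2 radius within which the smoothed prediction is guaranteed not to change.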
