This lecture explores adversarial examples in machine learning: how to craft attacks that mislead classifiers, and how to defend against them. It covers certifiable and provable defenses, including Gaussian (randomized) smoothing, which certifies robustness against L2-bounded perturbations. After surveying the landscape of provable defenses, the instructor introduces the Neural Perceptual Threat Model, which uses a neural network to approximate the true perceptual distance between images. Several attacks, including L2 and perceptual attacks, are analyzed, leading to a defense whose robustness generalizes across threat models. The lecture concludes with results on the CIFAR-10 and ImageNet-100 datasets, comparing the effectiveness of the different defense strategies.
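To make the Gaussian-smoothing idea concrete, below is a minimal sketch (not the lecture's code) of how a smoothed classifier predicts and certifies an L2 radius in the standard randomized-smoothing recipe: sample Gaussian noise around the input, take the majority vote of the base classifier, and convert the vote's empirical confidence into a certified radius. The `base_classifier` callable and all hyperparameters are illustrative assumptions, and a real implementation would replace the empirical top-class probability with a statistical lower confidence bound.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict_and_certify(base_classifier, x, sigma=0.25,
                                 n_samples=1000, num_classes=10):
    """Monte-Carlo sketch of a Gaussian-smoothed classifier.

    base_classifier: callable mapping a batch of inputs to integer labels.
    Returns (predicted_class, certified_l2_radius).
    """
    # Sample noisy copies of the input and classify each one.
    noise = np.random.randn(n_samples, *x.shape) * sigma
    labels = base_classifier(x[None, ...] + noise)        # shape: (n_samples,)
    counts = np.bincount(labels, minlength=num_classes)

    # The majority vote defines the smoothed classifier's prediction.
    top_class = int(np.argmax(counts))
    p_top = counts[top_class] / n_samples                 # empirical top-class prob.
    p_top = min(p_top, 1 - 1e-6)                          # keep Phi^{-1} finite

    # Certified L2 radius: sigma * Phi^{-1}(p_top). In the rigorous version,
    # p_top is a lower confidence bound (e.g. Clopper-Pearson) on the vote.
    radius = sigma * norm.ppf(p_top) if p_top > 0.5 else 0.0
    return top_class, max(radius, 0.0)
```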
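The perceptual attacks mentioned above bound a learned perceptual distance rather than an Lp norm. Here is a minimal sketch of that idea, assuming the open-source `lpips` PyTorch package as the learned distance; the Lagrangian formulation, the `model` interface, and the hyperparameters are assumptions for illustration, not necessarily the exact attack presented in the lecture. The objective maximizes the classification loss while penalizing any LPIPS distance beyond the perceptual budget.

```python
import torch
import torch.nn.functional as F
import lpips  # pip install lpips; learned perceptual distance (LPIPS)

def lagrangian_perceptual_attack(model, x, y, eps=0.5, step=0.01,
                                 n_steps=40, lam=10.0):
    """Sketch of a perceptual attack: maximize the classification loss
    subject to a soft LPIPS-distance constraint of radius eps.

    model: classifier taking images scaled to [-1, 1]; x: input batch;
    y: true labels. All hyperparameters here are illustrative.
    """
    lpips_dist = lpips.LPIPS(net='alex').to(x.device)
    x_adv = x.clone().detach().requires_grad_(True)

    for _ in range(n_steps):
        logits = model(x_adv)
        # Lagrangian objective: misclassification gain minus a penalty
        # for exceeding the perceptual budget eps.
        d = lpips_dist(x_adv, x).mean()
        obj = F.cross_entropy(logits, y) - lam * F.relu(d - eps)
        grad, = torch.autograd.grad(obj, x_adv)

        # Sign-gradient ascent step, keeping pixels in the valid range.
        x_adv = (x_adv + step * grad.sign()).clamp(-1, 1)
        x_adv = x_adv.detach().requires_grad_(True)

    return x_adv.detach()
```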