This lecture explores adversarial machine learning, focusing on how small input perturbations can cause neural networks to misclassify with high confidence. It covers the security implications, risks, and vulnerabilities associated with adversarial examples, including how such examples are generated, the main attack methods, and the challenges of optimizing classification losses for attack purposes. The lecture also distinguishes white-box from black-box attacks, discusses transfer attacks and the importance of physically realizable attacks, and presents adversarial training as a method for training robust models, along with the trade-off between accuracy and robustness.
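As an illustration of the white-box adversarial example generation mentioned above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard attack; the lecture's own attack methods may differ, and the names `model`, `x`, `y`, and `epsilon` are illustrative placeholders rather than anything from the lecture.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Perturb inputs x within an L-infinity ball of radius epsilon so the
    classifier is more likely to misclassify them (white-box: uses gradients)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # classification loss the attacker maximizes
    loss.backward()
    # One step in the direction of the sign of the input gradient
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixel values in a valid range
```

Adversarial training, also discussed in the lecture, feeds perturbed examples of this kind back into the training loop so the model learns to resist them, which is where the accuracy-robustness trade-off arises.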
This video is available exclusively on MediaSpace for a restricted audience. Please log in to MediaSpace to access it if you have the necessary permissions.