This lecture covers robustness in visual intelligence, focusing on resilience to distribution shifts, both non-adversarial shifts such as lighting changes and adversarial attacks. It discusses examples of model failure, causes of failure such as biased training data distributions, and strategies to improve robustness, including (pre)training on diverse data and data augmentation.
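As a minimal illustration of the data-augmentation strategy mentioned above, the sketch below simulates two non-adversarial shifts (a horizontal flip and a brightness change mimicking different lighting) on an image stored as a NumPy array. The function name `augment` and the specific parameter ranges are illustrative choices, not from the lecture.

```python
import numpy as np

def augment(image, rng):
    """Apply simple non-adversarial shifts to an image in [0, 1]:
    a random horizontal flip and a random brightness rescaling."""
    if rng.random() < 0.5:
        image = image[:, ::-1]  # flip along the width axis
    factor = rng.uniform(0.7, 1.3)  # crude stand-in for a lighting change
    return np.clip(image * factor, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))  # toy 32x32 RGB image
aug = augment(img, rng)
print(aug.shape)
```

Training on such perturbed copies exposes the model to variations it would otherwise only meet at test time, which is the intuition behind augmentation-based robustness.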