This lecture covers the importance of model interpretability in modern Natural Language Processing (NLP). It examines methods such as model probing, local explanations, and model-generated explanations for understanding how models arrive at their predictions, including the use of linguistic probes to interpret model behavior. It also discusses the limitations of these interpretability methods, the difficulty of producing faithful explanations, and the need to distinguish between plausibility and faithfulness in model explanations.
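As a rough illustration of the probing idea (a minimal sketch, not code from the lecture), one can freeze a pretrained encoder, extract hidden states from a chosen layer, and train a simple linear classifier to predict a linguistic property. The model name, layer index, toy sentences, and labels below are illustrative assumptions only.

```python
# Probing-classifier sketch: is a linguistic property linearly decodable from a layer?
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

# Toy data (hypothetical): label 1 if the sentence's subject is plural.
sentences = ["The dog barks.", "The dogs bark.", "A child sings.", "The children sing."]
labels = [0, 1, 0, 1]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
encoder.eval()

LAYER = 6  # probe an intermediate layer; which layers encode what is the question probes ask

features = []
with torch.no_grad():
    for s in sentences:
        inputs = tokenizer(s, return_tensors="pt")
        outputs = encoder(**inputs)
        # Mean-pool the chosen layer's token representations into one sentence vector.
        layer_states = outputs.hidden_states[LAYER].squeeze(0)
        features.append(layer_states.mean(dim=0).numpy())

# The probe itself is just a linear classifier on frozen features. High accuracy
# suggests, but does not prove, that the property is encoded at that layer.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("Probe training accuracy:", probe.score(features, labels))
```

Note that a probe's accuracy measures decodability, not whether the model actually uses that information, which is one reason the lecture separates plausibility from faithfulness.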