This lecture covers coreference resolution models, including end-to-end neural models and BERT-based approaches to span-based prediction. It discusses the computational challenge of scoring every pair of spans and the role of attention in identifying coreferent mentions. The lecture then explores graph refinement with the Graph2Graph Transformer and reviews state-of-the-art results for coreference resolution. It concludes with a summary of the significance of coreference in discourse and the accuracy gains from pretrained Transformers.
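To see why scoring every pair of spans is challenging, consider a minimal sketch of the span-pair enumeration behind end-to-end coreference models (in the style of Lee et al.'s end-to-end coreference resolver). The function names and toy scorers here are illustrative placeholders, not the lecture's actual implementation:

```python
from itertools import combinations

def candidate_spans(n_tokens, max_width=3):
    """Enumerate all spans up to max_width tokens: O(n * max_width) spans."""
    return [(i, j) for i in range(n_tokens)
                   for j in range(i, min(i + max_width, n_tokens))]

def mention_score(span):
    # Toy stand-in for a learned mention scorer s_m(i);
    # real models score a span embedding with a feed-forward net.
    return -(span[1] - span[0])

def pair_score(a, b):
    # Toy stand-in for the pairwise score s(i, j) = s_m(i) + s_m(j) + s_a(i, j).
    return mention_score(a) + mention_score(b)

spans = candidate_spans(8)
# Scoring all pairs is quadratic in the number of candidate spans --
# the blow-up discussed in the lecture. Real systems prune to the
# top-scoring mentions before computing pairwise antecedent scores.
pairs = list(combinations(spans, 2))
print(len(spans), len(pairs))
```

Even for 8 tokens and spans capped at width 3, the pair count grows quickly, which is why aggressive mention pruning is essential in practice.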
This video is available exclusively on Mediaspace to a restricted audience. If you have the necessary permissions, log in to Mediaspace to access it.
Watch on Mediaspace