This lecture explores the concepts of systematicity and compositionality in human languages and neural representations. It discusses the relationship between syntax and semantics, emphasizing the importance of understanding how the parts of an expression contribute to the meaning of the whole. The lecture examines the challenges neural networks face in generalizing systematically, highlighting the need for dynamic benchmarks to evaluate models accurately. It also addresses the role of unsupervised learning in deep learning for natural language processing, questioning the extent to which linguistic properties can be captured without supervision.
This video is available exclusively on MediaSpace for a restricted audience. If you have the necessary permissions, please log in to MediaSpace to access it.