Explores neuro-symbolic representations for commonsense knowledge and reasoning, emphasizing the challenges and limitations deep learning faces in natural language processing.
Delves into Deep Learning for Natural Language Processing, exploring Neural Word Embeddings, Recurrent Neural Networks, and Attentive Neural Modeling with Transformers.
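As a quick illustration of the attention mechanism at the heart of Transformers, the following is a minimal NumPy sketch of scaled dot-product attention; the function name, shapes, and toy data are illustrative assumptions, not code from the chapter itself.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Illustrative sketch: attend over values V using query/key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# Toy example (assumed data): 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V))
```

In a full Transformer, Q, K, and V are learned linear projections of the token embeddings and the mechanism is applied across multiple heads; the sketch above shows only the core computation.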
Covers the basics of Natural Language Processing, from traditional to modern approaches, highlighting the challenges of the field and the importance of studying both families of methods.
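To make the traditional side concrete, here is a minimal sketch of a bag-of-words representation, assuming scikit-learn is available; the corpus and variable names are illustrative and not taken from the chapter.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Assumed toy corpus for illustration
corpus = ["the cat sat on the mat", "the dog sat on the log"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)   # sparse document-term count matrix

print(vectorizer.get_feature_names_out())  # learned vocabulary
print(X.toarray())                         # one count vector per document
```

Unlike the dense, learned vectors of neural word embeddings, these count vectors are sparse and treat each word as an independent symbol, which is one reason the traditional and modern approaches complement each other.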