Lecture

Pre-Training: BiLSTM and Transformer

Description

This lecture explores pre-training in deep learning for natural language processing, focusing on BiLSTM and Transformer architectures. It examines contextualized word representations that can be transferred to new tasks, discussing models such as ELMo, BERT, and GPT. The presentation covers the training process, architectural details, and applications of these models, showing their effectiveness across a range of NLP tasks. It also explains the motivation for moving from static word embeddings to pre-training whole models, the role of language modeling as a pre-training objective, and the benefits of fine-tuning pre-trained models for specific tasks.
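To make the fine-tuning idea concrete, here is a minimal sketch (not taken from the lecture) of adapting a pre-trained Transformer to a downstream classification task with the Hugging Face transformers library; the model name, toy batch, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch: fine-tuning a pre-trained Transformer for
# sentence classification. Model name and hyperparameters are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g. a binary sentiment task
)

# Encode a toy batch; the pre-trained encoder supplies contextualized
# representations, and only the small classification head is new.
batch = tokenizer(
    ["a great lecture", "a dull lecture"],
    padding=True, return_tensors="pt",
)
labels = torch.tensor([1, 0])

# One fine-tuning step: all weights (encoder and head) receive gradients,
# so the pre-trained representations adapt to the specific task.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```

In practice this loop would run over a labeled dataset for a few epochs; the key point from the lecture's perspective is that the contextual encoder is pre-trained once (e.g. via language modeling) and then cheaply specialized per task.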
