Lecture

Pretraining: Transformers & Models

Description

This lecture covers model pretraining, focusing on word embeddings and contextualized word representations for transfer learning. It introduces key models such as BERT, T5, and GPT, comparing their pretraining objectives and applications, and examines the motivations behind pretraining, the effectiveness of large models, and the benefits of in-context learning. It also discusses the limitations of pretrained encoders, extensions of BERT, and advances in parameter-efficient finetuning methods, as well as the pretraining of encoders and decoders, the GPT series, and the implications of very large language models such as GPT-3. A minimal sketch of one such pretraining objective is given below.
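To make the encoder pretraining objective concrete, the following sketch illustrates BERT-style masked language modeling in PyTorch. It is not the lecture's code: the `model` callable, `mask_token_id`, `vocab_size`, and the uniform masking rate are illustrative assumptions (the actual BERT recipe uses an 80/10/10 replacement scheme for the selected tokens).

```python
# Minimal sketch of a masked language modeling objective (BERT-style).
# Assumes `model(token_ids)` returns per-position vocabulary logits of
# shape (batch, seq_len, vocab_size); all names here are illustrative.
import torch
import torch.nn.functional as F

def masked_lm_loss(model, token_ids, mask_token_id, vocab_size, mask_prob=0.15):
    """Mask a random subset of tokens and train the encoder to recover them."""
    token_ids = token_ids.clone()
    labels = token_ids.clone()

    # Select roughly `mask_prob` of the positions to predict.
    mask = torch.rand(token_ids.shape) < mask_prob
    labels[~mask] = -100              # -100 positions are ignored by cross_entropy
    token_ids[mask] = mask_token_id   # replace selected tokens with [MASK]

    logits = model(token_ids)         # (batch, seq_len, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, vocab_size),
        labels.reshape(-1),
        ignore_index=-100,
    )
```

A decoder-only model such as GPT instead uses a standard left-to-right language modeling loss, predicting each token from its prefix rather than from a bidirectional context.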
