Lecture

Principled Reinforcement Learning with Human Feedback

Description

This lecture presents a theoretical framework for Reinforcement Learning with Human Feedback (RLHF) that learns from ordinal (comparison-based) data, focusing on the convergence of reward estimators under different comparison models. It discusses the challenges that arise when training a policy against a learned reward model and introduces a pessimistic maximum-likelihood estimator (MLE) that improves policy performance. The analysis validates the empirical success of existing RLHF algorithms, provides insights for algorithm design, and unifies RLHF with max-entropy Inverse Reinforcement Learning. The lecture also covers the formulation of RLHF, the Plackett-Luce model for K-wise comparisons, and the connection with Inverse RL, along with experiments comparing the different estimators and the resulting policies.
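As background for the Plackett-Luce model mentioned above, the following is a minimal LaTeX sketch of its standard form, assuming a parametric reward model r_\theta; the notation is illustrative and not necessarily the lecture's own. Given a state (prompt) s and K candidate actions a_1, \dots, a_K, the model assigns to an observed ranking \tau the probability

    % Plackett-Luce likelihood of a ranking \tau over K candidates,
    % under an assumed reward model r_\theta (notation ours).
    P_\theta(\tau \mid s, a_1, \dots, a_K)
      = \prod_{i=1}^{K}
        \frac{\exp\bigl(r_\theta(s, a_{\tau(i)})\bigr)}
             {\sum_{j=i}^{K} \exp\bigl(r_\theta(s, a_{\tau(j)})\bigr)}.

    % For K = 2 this reduces to the Bradley-Terry pairwise model:
    P_\theta(a_1 \succ a_2 \mid s)
      = \frac{\exp\bigl(r_\theta(s, a_1)\bigr)}
             {\exp\bigl(r_\theta(s, a_1)\bigr) + \exp\bigl(r_\theta(s, a_2)\bigr)}.

The reward model is fit by maximizing the log-likelihood of the observed comparisons; this is the MLE whose convergence the lecture analyzes. The pessimistic MLE mentioned above then, roughly speaking, optimizes the policy against a conservative (lower-confidence-bound) estimate of the reward rather than the point estimate.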
