This lecture covers the basics of Reinforcement Learning (RL) as a sequential decision problem, focusing on discrete states and actions, policies, value functions, Markov Decision Processes (MDPs), Bellman equations, and methods for finding optimal policies. The instructor explains Dynamic Programming, Monte-Carlo sampling, and Temporal-Difference learning as approaches for estimating value functions. The lecture also covers the Bellman Optimality Equation, control strategies, and the iterative process of learning an optimal policy. Finally, drawbacks of standard RL are discussed, such as the curse of dimensionality and the difficulty of handling continuous state and action spaces.
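To make the value-function and Bellman-equation ideas concrete, here is a minimal sketch of tabular value iteration (a Dynamic Programming method) on a hypothetical four-state chain MDP. The states, rewards, and parameters below are purely illustrative and not taken from the lecture; the sketch only shows the general pattern of repeatedly applying the Bellman optimality backup and then reading off a greedy policy.

```python
import numpy as np

# Minimal sketch: tabular value iteration on a hypothetical 4-state chain MDP.
# All states, rewards, and parameters are illustrative, not from the lecture.
n_states, n_actions = 4, 2   # states 0..3, actions: 0 = left, 1 = right
gamma, theta = 0.9, 1e-8     # discount factor and convergence threshold

# P[s][a] is a list of (probability, next_state, reward, terminal) tuples.
# Moving right from state 2 reaches the terminal state 3 and pays reward +1.
P = {
    s: {
        0: [(1.0, max(s - 1, 0), 0.0, False)],
        1: [(1.0, min(s + 1, n_states - 1),
             1.0 if s + 1 == n_states - 1 else 0.0,
             s + 1 == n_states - 1)],
    }
    for s in range(n_states)
}
# Make the terminal state absorbing with zero reward so its value stays 0.
P[n_states - 1] = {a: [(1.0, n_states - 1, 0.0, True)] for a in range(n_actions)}

def q_value(V, s, a):
    # Bellman backup: Q(s,a) = sum_{s'} p(s'|s,a) * (r + gamma * V(s')).
    return sum(p * (r + gamma * V[s2] * (not done)) for p, s2, r, done in P[s][a])

def value_iteration():
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states):
            best = max(q_value(V, s, a) for a in range(n_actions))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:          # stop once the values have converged
            break
    # Greedy policy with respect to the converged value function.
    policy = [max(range(n_actions), key=lambda a: q_value(V, s, a))
              for s in range(n_states)]
    return V, policy

V, policy = value_iteration()
print("optimal values:", V)
print("greedy policy (0 = left, 1 = right):", policy)
```

Note that this kind of sweep over every state is exactly what breaks down under the curse of dimensionality mentioned above: the table of values grows with the number of states, which is why continuous or very large state spaces call for function approximation instead.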