This lecture discusses the challenge of representing input spaces in reinforcement learning, where tabular methods become impractical because the number of states is too large. The instructor explains how neural networks and radial basis function networks can model continuous input spaces, using examples such as Backgammon and Mountain Car. The lecture covers the transition from tabular TD learning to neural-network representations, including the use of error functions and gradients, and explores several variants of TD learning, emphasizing how function approximation lets Q-values generalize across continuous spaces.
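The move from tabular TD learning to function approximation can be illustrated with a minimal sketch: semi-gradient TD(0) value estimation using radial basis function features over a continuous state. This is an assumption-laden toy example (a hypothetical one-dimensional drift task, not Mountain Car or Backgammon themselves), meant only to show the shape of the update: compute the TD error, then nudge the weights along the gradient of the value estimate.

```python
import math
import random

def rbf_features(s, centers, width=0.5):
    """Radial basis function features for a scalar state s (one per center)."""
    return [math.exp(-((s - c) ** 2) / (2 * width ** 2)) for c in centers]

def value(w, feats):
    """Linear value estimate: dot product of weights and features."""
    return sum(wi * fi for wi, fi in zip(w, feats))

def semi_gradient_td0(episodes=200, alpha=0.1, gamma=0.9, seed=0):
    """Semi-gradient TD(0) on a toy continuous chain (hypothetical task):
    the state s lives in [0, 1], drifts rightward each step, and the
    episode ends with reward 1 once s reaches 1."""
    rng = random.Random(seed)
    centers = [i / 4 for i in range(5)]  # 5 RBF centers spread over [0, 1]
    w = [0.0] * len(centers)
    for _ in range(episodes):
        s = 0.0
        while s < 1.0:
            s_next = s + rng.uniform(0.05, 0.2)
            r = 1.0 if s_next >= 1.0 else 0.0
            feats = rbf_features(s, centers)
            v = value(w, feats)
            v_next = 0.0 if s_next >= 1.0 else value(w, rbf_features(s_next, centers))
            delta = r + gamma * v_next - v  # TD error
            for i, f in enumerate(feats):   # semi-gradient weight update
                w[i] += alpha * delta * f
            s = s_next
    return w, centers
```

Replacing the linear combination of RBF features with a neural network changes only the function class and its gradient; the TD error and the update rule keep the same form, which is the core point of the lecture's transition from tables to networks.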