This lecture introduces MuZero, a model-based reinforcement learning agent that achieved state-of-the-art performance in board games and Atari video games. Rather than requiring a simulator of the true environment, MuZero plans and acts using a learned latent representation and dynamics model: an encoder maps observations to a hidden state, and a learned transition function rolls that state forward under hypothetical actions. The model is trained end-to-end to predict the policy, value function, and reward at each step of the rollout. The lecture also recounts MuZero's success story and its approach of learning an encoding of observations together with the transition function. It concludes with insights on handling the problem of correlated samples in reinforcement learning.
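The three learned components described above — an encoder, a latent dynamics function, and a prediction head for policy and value — can be sketched as follows. This is a minimal toy illustration with randomly initialized linear layers, not MuZero's actual network architecture; all dimensions and parameter names here are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, HIDDEN_DIM, N_ACTIONS = 8, 16, 4  # hypothetical toy sizes

# Toy random parameters; a real MuZero trains deep networks end-to-end.
W_repr = rng.normal(size=(OBS_DIM, HIDDEN_DIM)) * 0.1
W_dyn = rng.normal(size=(HIDDEN_DIM + N_ACTIONS, HIDDEN_DIM)) * 0.1
W_rew = rng.normal(size=(HIDDEN_DIM,)) * 0.1
W_pol = rng.normal(size=(HIDDEN_DIM, N_ACTIONS)) * 0.1
W_val = rng.normal(size=(HIDDEN_DIM,)) * 0.1

def representation(obs):
    """h: encode a raw observation into a latent state."""
    return np.tanh(obs @ W_repr)

def dynamics(state, action):
    """g: predict the next latent state and the immediate reward."""
    one_hot = np.eye(N_ACTIONS)[action]
    next_state = np.tanh(np.concatenate([state, one_hot]) @ W_dyn)
    return next_state, float(next_state @ W_rew)

def prediction(state):
    """f: predict a policy distribution and a value from a latent state."""
    logits = state @ W_pol
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()
    return policy, float(state @ W_val)

def unroll(obs, actions):
    """Roll the learned model forward entirely in latent space,
    collecting the (policy, value, reward) predictions at each step."""
    state = representation(obs)
    steps = []
    for a in actions:
        policy, value = prediction(state)
        state, reward = dynamics(state, a)
        steps.append((policy, value, reward))
    return steps

steps = unroll(rng.normal(size=OBS_DIM), actions=[0, 2, 1])
```

During training, the (policy, value, reward) predictions from such an unroll are matched against search policies, observed returns, and observed rewards, which is what makes the latent state useful for planning without ever reconstructing observations.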