This lecture focuses on dynamic games, exploring their structure and applications. The instructor begins by reviewing concepts from previous lectures, including correlated and coarse-correlated equilibria, then transitions to dynamic games, emphasizing the shift from tree models to loop models. The instructor explains how feedback games can be represented in extensive form, detailing how the state evolves as a function of the players' actions, and covers the importance of information structures, distinguishing between open-loop and perfect state-feedback models.

The dynamics of non-zero-sum dynamic games are illustrated with a three-truck platoon example, highlighting the players' objectives and the state evolution. The concept of subgame-perfect Nash equilibrium is introduced, along with backward induction as a method for finding such equilibria: starting from the final stage, each player's optimal action is computed and substituted back, one stage at a time, toward the root. The lecture concludes with a brief overview of dynamic programming and its relevance to computing optimal feedback laws in multi-player settings, setting the stage for further exploration in subsequent sessions.
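Backward induction on a finite game tree can be sketched in a few lines. The following is a minimal illustration, not taken from the lecture: the tree structure, node names, and payoff values are all invented for demonstration, and each internal node records whose turn it is while each leaf records a payoff pair.

```python
# Backward induction on a toy two-player extensive-form game.
# Internal nodes store the player to move and a dict of children;
# leaves store a payoff tuple (u1, u2). All names and payoffs here
# are illustrative assumptions, not values from the lecture.

def backward_induction(node):
    """Return (payoffs, chosen_action) for the subgame rooted at node."""
    if "payoffs" in node:            # leaf: nothing left to decide
        return node["payoffs"], None
    player = node["player"]          # index (0 or 1) of the mover
    best = None
    for action, child in node["children"].items():
        payoffs, _ = backward_induction(child)
        # The mover picks the action maximizing their own payoff,
        # anticipating optimal play in every later subgame.
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, action)
    return best

# A depth-2 game: player 0 moves first, then player 1 responds.
game = {
    "player": 0,
    "children": {
        "L": {"player": 1, "children": {
            "l": {"payoffs": (3, 1)},
            "r": {"payoffs": (0, 0)},
        }},
        "R": {"player": 1, "children": {
            "l": {"payoffs": (1, 2)},
            "r": {"payoffs": (2, 3)},
        }},
    },
}

payoffs, first_move = backward_induction(game)
print(first_move, payoffs)  # root action and equilibrium payoffs
```

Because every node's choice is optimal against optimal continuation play, the resulting strategy profile is a Nash equilibrium in every subgame, which is exactly the subgame-perfection requirement the lecture introduces.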