This lecture introduces Markov Decision Processes (MDPs), a foundational model in reinforcement learning. The instructor defines an MDP by its components: a finite set of states, a finite set of actions, transition probabilities, and immediate rewards, working with discrete state and action spaces throughout. The lecture then turns to solving MDPs with dynamic programming and linear programming techniques, covering value iteration and policy iteration. Worked examples ground the theory, including a travel example to Rome that illustrates absorbing states. The instructor also clarifies the relationship between MDPs and reinforcement learning: MDPs assume the transition dynamics and rewards are known, whereas reinforcement learning typically must learn them from experience. The lecture closes with exercises on applying MDPs to optimization problems.
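Since the summary names value iteration as one of the solution methods, a minimal sketch may help make it concrete. The code below is an illustrative implementation, not the instructor's; the toy transition tensor `P`, reward matrix `R`, discount factor, and tolerance are all hypothetical values chosen for demonstration.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Value iteration for a finite MDP.

    P: transition probabilities, shape (A, S, S); P[a, s, s'] = Pr(s' | s, a)
    R: immediate rewards, shape (A, S); R[a, s] = expected reward for action a in state s
    Returns the (near-)optimal value function and a greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        # Bellman optimality backup: Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)          # best action value in each state
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Hypothetical 2-state, 2-action MDP for illustration.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # transitions under action 0
              [[0.5, 0.5], [0.0, 1.0]]])   # transitions under action 1
R = np.array([[1.0, 0.0],                  # rewards for action 0 in states 0, 1
              [0.0, 2.0]])                 # rewards for action 1 in states 0, 1
V, policy = value_iteration(P, R)
print("Optimal values:", V, "greedy policy:", policy)
```

Policy iteration, the other method mentioned, instead alternates between evaluating a fixed policy and greedily improving it; under a discount factor below one, both methods converge to the same optimal policy.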