Covers model-free prediction in reinforcement learning, focusing on Monte Carlo and temporal-difference (TD) methods for estimating value functions without knowledge of the transition dynamics.
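As a companion to this summary, here is a minimal tabular TD(0) prediction sketch. The environment interface (`env.reset()`, `env.step(action)` returning `(next_state, reward, done)`) and the fixed `policy` callable are illustrative assumptions, not an API from the lecture material.

```python
from collections import defaultdict

def td0_prediction(env, policy, num_episodes=1000, alpha=0.1, gamma=0.99):
    """Estimate the state-value function V^pi of a fixed policy
    without access to the transition dynamics (model-free)."""
    V = defaultdict(float)  # value estimates; unseen states default to 0
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            # TD(0) update: move V(s) toward the bootstrapped target r + gamma * V(s')
            target = reward + (0.0 if done else gamma * V[next_state])
            V[state] += alpha * (target - V[state])
            state = next_state
    return V
```

A first-visit Monte Carlo estimator would instead wait until the end of each episode and average full returns per state; TD(0) trades that unbiased target for lower-variance bootstrapped updates.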
Introduces reinforcement learning, covering its definitions, applications, and theoretical foundations, while outlining the course structure and objectives.
Covers the fundamentals of optimal control theory, focusing on the formulation of optimal control problems (OCPs), existence of solutions, performance criteria, physical constraints, and the principle of optimality.
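For reference, a generic finite-horizon OCP and Bellman's principle of optimality can be written as follows; the notation ($\ell$ for the running cost, $\phi$ for the terminal cost, $U$ and $X$ for the constraint sets) is illustrative and may differ from the lecture's.

```latex
% Generic finite-horizon optimal control problem (illustrative notation)
\begin{aligned}
\min_{u(\cdot)} \quad & J(u) = \int_{t_0}^{t_f} \ell\big(x(t), u(t)\big)\,dt + \phi\big(x(t_f)\big) \\
\text{s.t.} \quad & \dot{x}(t) = f\big(x(t), u(t)\big), \qquad x(t_0) = x_0, \\
& u(t) \in U, \quad x(t) \in X \quad \text{(physical constraints)}.
\end{aligned}

% Principle of optimality: for any intermediate time s \in [t, t_f],
V(t, x) = \min_{u(\cdot)} \left\{ \int_{t}^{s} \ell\big(x(\tau), u(\tau)\big)\,d\tau
          + V\big(s, x(s)\big) \right\}.
```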
Explores the stability of ordinary differential equations (ODEs), focusing on the dependence of solutions on initial data, critical points, linearization, and the control of nonlinear systems.
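A standard statement of linearization about an equilibrium, on which this material presumably builds (notation assumed, not taken from the lecture):

```latex
% Linearization of a nonlinear ODE around an equilibrium x^* (illustrative notation)
\dot{x} = f(x), \qquad f(x^*) = 0, \qquad
A = \left.\frac{\partial f}{\partial x}\right|_{x = x^*}, \qquad
\dot{\xi} \approx A\,\xi, \quad \xi = x - x^*.

% Lyapunov's indirect (first) method: if every eigenvalue of A satisfies
% \operatorname{Re}(\lambda_i) < 0, the equilibrium x^* is locally
% asymptotically stable; if some \operatorname{Re}(\lambda_i) > 0, it is unstable.
```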