Dynamic Programming: Optimal Control
Related lectures (32)
Controlled Stochastic Processes
Explores controlled stochastic processes, focusing on analysis, behavior, and optimization, using dynamic programming to solve real-world problems.
Nonlinear Model Predictive Control
Explores Nonlinear Model Predictive Control, covering stability, optimality, pitfalls, and examples.
Linear Quadratic (LQ) Optimal Control: Proof of Theorem
Covers the proof of the recursive formula for the optimal gains in LQ control over a finite horizon.
Dynamic Programming: Optimal Control
Explores Dynamic Programming for optimal control, covering machine replacement, Markov chains, control policies, and linear quadratic problems.
Infinite-Horizon LQ Control: Solution & Example
Explores Infinite-Horizon Linear Quadratic (LQ) optimal control, emphasizing solution methods and practical examples.
Markov Decision Processes: Foundations of Reinforcement Learning
Covers Markov Decision Processes, their structure, and their role in reinforcement learning.
Interactive Lecture: Reinforcement Learning
Explores advanced reinforcement learning topics, including policies, value functions, Bellman recursion, and on-policy TD control.
Stability: Poles, Zeros, and Control
Covers stability, poles, zeros, and control in dynamic systems, emphasizing the importance of observability.
Value Iteration Acceleration: PID and Operator Splitting
Explores accelerating the Value Iteration algorithm using control theory and matrix splitting techniques to achieve faster convergence.
Asset Selling Problem
Explores the Asset Selling Problem to maximize long-term reward without a deadline.