Lecture: Optimal Control: OCPs

Description

This lecture covers Optimal Control Problems (OCPs), focusing on the Calculus of Variations, Geometric Optimality Conditions, and the Principle of Optimality. It delves into the necessary conditions of optimality, the existence of optimal controls, and the performance criteria in OCPs. The lecture also discusses numerical solutions of OCPs, including the Hamilton-Jacobi-Bellman equation, Pontryagin's Maximum Principle, and shooting algorithms.
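To make the shooting idea concrete, here is a minimal single-shooting sketch for a toy problem that is not taken from the lecture: minimize the integral of u(t)²/2 over [0, 1] subject to x' = u, x(0) = 0, x(1) = 1. Pontryagin's conditions make the costate p constant with u = p, so shooting reduces to a one-dimensional root find on the initial costate guess.

```python
# Toy single-shooting sketch (illustrative; problem and names are chosen
# for this example, not from the lecture).
# Problem: minimize ∫₀¹ u(t)²/2 dt  s.t.  x' = u, x(0) = 0, x(1) = 1.
# PMP: H = -u²/2 + p·u gives u* = p and p' = 0, so p is constant.
# Shooting: guess p, integrate the state forward, adjust p until x(1) = 1.

def shoot(p, steps=1000):
    """Integrate x' = p from x(0) = 0 with forward Euler; return x(1)."""
    x, dt = 0.0, 1.0 / steps
    for _ in range(steps):
        x += p * dt
    return x

def solve(target=1.0, lo=-5.0, hi=5.0, tol=1e-8):
    """Bisect on the costate guess p until the endpoint x(1) hits target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_star = solve()
print(p_star)  # the analytic optimum is u(t) ≡ 1, i.e. p = 1
```

Real shooting algorithms integrate the coupled state-costate system with an ODE solver and use Newton-type root finding, but the structure (guess, integrate, match the boundary condition) is the same.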

Related concepts (182)

Optimal control

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure.
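In symbols, a prototypical OCP (Bolza form; the notation below is generic, not tied to any one source) reads:

```latex
\min_{u(\cdot)} \; \phi\bigl(x(T)\bigr) + \int_0^T L\bigl(x(t), u(t), t\bigr)\,dt
\quad \text{s.t.} \quad
\dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad x(0) = x_0, \qquad u(t) \in U.
```

In the spacecraft example, $x$ would collect position and velocity, $u$ the thrust, and the integrand the rate of fuel expenditure.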

Model predictive control

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification.
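The receding-horizon idea can be sketched on a deliberately simplified scalar system (an illustration only; real MPC solves a constrained QP/NLP over full input sequences each step, typically with a model obtained by system identification):

```python
# Minimal receding-horizon sketch for a scalar system x⁺ = a·x + b·u.
# The "optimizer" here is a brute-force grid search over constant inputs,
# standing in for the constrained solver a real MPC would call.

def mpc_step(x, a=0.9, b=1.0, horizon=5, u_max=1.0, n_grid=41):
    """Return the best constant input over the horizon, minimizing the
    sum of x² + u² along the predicted trajectory, with |u| <= u_max."""
    best_u, best_cost = 0.0, float("inf")
    for k in range(n_grid):
        u = -u_max + 2.0 * u_max * k / (n_grid - 1)
        xk, cost = x, 0.0
        for _ in range(horizon):
            cost += xk * xk + u * u
            xk = a * xk + b * u
        if cost < best_cost:
            best_cost, best_u = cost, u
    return best_u

# Closed loop: apply only the first planned input, then re-plan at the
# next state -- the receding horizon that defines MPC.
x = 5.0
for _ in range(20):
    x = 0.9 * x + 1.0 * mpc_step(x)
print(x)  # driven close to the origin while respecting |u| <= 1
```

Applying only the first move of each plan and re-optimizing at the measured state is what gives MPC feedback, in contrast to open-loop optimal control.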

Bellman equation

A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes.
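For a discounted infinite-horizon problem, one common form of the recursion (the notation is illustrative) is:

```latex
V(x) \;=\; \max_{a \in \Gamma(x)} \Bigl\{ F(x, a) \;+\; \beta \, V\bigl(T(x, a)\bigr) \Bigr\},
```

where $V$ is the value function, $\Gamma(x)$ the set of feasible actions in state $x$, $F$ the immediate payoff, $T$ the transition map, and $\beta \in (0,1)$ the discount factor. The right-hand side is exactly the split into "payoff from the initial choice" plus "value of the remaining problem."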

Stochastic control

Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, in some defined sense, despite the presence of this noise.
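A continuous-time formulation along these lines (one common convention; the symbols are generic) minimizes an expected cost over controlled diffusion dynamics:

```latex
\min_{u(\cdot)} \; \mathbb{E}\!\left[ \int_0^T L\bigl(x(t), u(t)\bigr)\,dt \;+\; \phi\bigl(x(T)\bigr) \right]
\quad \text{s.t.} \quad
dx(t) = f\bigl(x(t), u(t)\bigr)\,dt + \sigma\bigl(x(t), u(t)\bigr)\,dW(t),
```

where $W$ is a Wiener process modelling the driving noise; the expectation replaces the deterministic cost of an ordinary OCP.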

Pontryagin's maximum principle

Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints on the state or input controls. It states that any optimal control, along with the optimal state trajectory, must solve the so-called Hamiltonian system, which is a two-point boundary value problem, together with a maximum condition on the control Hamiltonian.
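In one common sign convention (conventions differ across texts), with control Hamiltonian $H(x, u, p) = p^{\top} f(x, u) - L(x, u)$, the necessary conditions read:

```latex
\dot{x}^*(t) = \frac{\partial H}{\partial p}, \qquad
\dot{p}(t) = -\frac{\partial H}{\partial x}, \qquad
u^*(t) \in \arg\max_{u \in U} H\bigl(x^*(t), u, p(t)\bigr),
```

with boundary conditions split between $x$ at $t = 0$ and the costate $p$ (transversality) at $t = T$, which is what makes the Hamiltonian system a two-point boundary value problem.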

EE-715: Optimal control

This doctoral course provides an introduction to optimal control covering fundamental theory, numerical implementation and problem formulation for applications.

Related lectures (34)

Optimal Control Theory: Basics (EE-715: Optimal control)

Covers the fundamentals of optimal control theory, focusing on defining OCPs, existence of solutions, performance criteria, physical constraints, and the principle of optimality.

Nonlinear Programming: Part I (EE-715: Optimal control)

Covers the fundamentals of Nonlinear Programming and its applications in Optimal Control, exploring techniques, examples, optimality definitions, and necessary conditions.

Nonlinear Model Predictive Control (EE-715: Optimal control)

Explores Nonlinear Model Predictive Control, covering stability, optimality, pitfalls, and examples.

Optimal Control Theory: OCPs (EE-715: Optimal control)

Covers Optimal Control Theory, focusing on Optimal Control Problems (OCPs) and the calculus of variations.

Nonlinear Model Predictive Control: Stability and Design Steps (EE-715: Optimal control)

Explores Nonlinear Model Predictive Control principles, stability analysis, design steps, and practical considerations.