
# Optimal Control Theory: Basics

Description

This lecture covers the fundamentals of optimal control theory, focusing on optimal control problems (OCPs). Topics include the definition of OCPs, admissible controls, dynamical systems, existence of solutions, performance criteria and their equivalence, and physical constraints. The lecture also covers the calculus of variations, the Gâteaux derivative, weak and strong minima, and geometric conditions of optimality. It then discusses the principle of optimality, the interpretation of adjoint variables, and OCPs with terminal equality constraints, together with the associated necessary conditions of optimality. The lecture concludes with an overview of these necessary conditions and their extension to input and state constraints.
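
The generic finite-horizon OCP underlying these topics can be sketched as follows (standard textbook notation, not taken from the lecture slides):

```latex
\begin{aligned}
\min_{u(\cdot)} \quad & J(u) = \phi\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} \ell\bigl(x(t), u(t)\bigr)\,dt \\
\text{s.t.} \quad & \dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad x(t_0) = x_0, \\
& u(t) \in U \quad \text{for all } t \in [t_0, t_f].
\end{aligned}
```

Here $x$ is the state, $u$ the control, $\ell$ the running cost, $\phi$ the terminal cost, and $U$ the set of admissible control values; a control $u(\cdot)$ is admissible if it satisfies this constraint along the trajectory.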


In course

EE-715: Optimal control

This doctoral course provides an introduction to optimal control, covering fundamental theory, numerical implementation, and problem formulation for applications.

Related concepts

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification.
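
The receding-horizon idea behind MPC can be sketched in a few lines. The example below, a hypothetical one not taken from the source, uses an unconstrained linear-quadratic problem solved by backward Riccati recursion on a double-integrator model; constraint handling, the defining feature of MPC in practice, is omitted for brevity:

```python
import numpy as np

# Hypothetical discrete-time double-integrator model: x_{k+1} = A x_k + B u_k
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)          # state cost weight
R = np.array([[0.1]])  # input cost weight

def mpc_control(x, horizon=20):
    """Return the first input of a finite-horizon LQ problem,
    computed by backward Riccati recursion."""
    P = Q  # terminal cost-to-go
    K = np.zeros((1, 2))
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x  # receding horizon: apply only the first input

# Closed-loop simulation: re-solve the finite-horizon problem at every step
x = np.array([1.0, 0.0])
for _ in range(100):
    u = mpc_control(x)
    x = A @ x + B @ u
```

At each step the controller optimizes over the full prediction horizon but applies only the first input, then re-solves from the newly measured state; this feedback through repeated optimization is what distinguishes MPC from open-loop optimal control.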

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
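
The two subfields can be illustrated with a toy criterion, closeness to the value 5 (an example of my own, not from the source): discrete optimization searches a finite set, while continuous optimization moves through a continuum, here by gradient descent:

```python
# Discrete optimization: pick the best element of a finite set by exhaustive search
candidates = [3, 7, 1, 9, 4]
best_discrete = min(candidates, key=lambda v: (v - 5) ** 2)

# Continuous optimization: gradient descent on f(x) = (x - 5)^2, f'(x) = 2(x - 5)
x = 0.0
for _ in range(200):
    grad = 2 * (x - 5)
    x -= 0.1 * grad  # step against the gradient
```

The discrete search returns 4 (squared distance 1), while gradient descent converges to the unconstrained minimizer x = 5.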

Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning. Reinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected.
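
A minimal instance of this idea is tabular Q-learning on a toy chain of states (an illustrative example, not from the lecture): the agent learns action values purely from rewards, with no labelled correct actions provided:

```python
import random
random.seed(0)

# Toy deterministic chain MDP: states 0..4, actions 0 (left) / 1 (right),
# reward 1 for reaching the terminal state 4.
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda a: Q[s][a])
        s2, r, done = step(s, a)
        # Q-learning update toward the reward plus discounted best next value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

policy = [max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)]
```

After training, the greedy policy moves right in every non-terminal state, i.e. toward the reward, even though no input/output pairs were ever labelled.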

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure.

The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. Inspired by—but distinct from—the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian.
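
In standard minimum-cost notation (the lecture's own conventions may differ), for dynamics $\dot{x} = f(x, u)$ and running cost $\ell(x, u)$ the control Hamiltonian and the associated Pontryagin conditions read:

```latex
H(x, u, \lambda) = \ell(x, u) + \lambda^{\top} f(x, u),
\qquad
\dot{x} = \frac{\partial H}{\partial \lambda} = f(x, u),
\qquad
\dot{\lambda} = -\frac{\partial H}{\partial x},
\qquad
u^{*}(t) \in \arg\min_{u \in U} H\bigl(x^{*}(t), u, \lambda(t)\bigr),
```

where $\lambda$ is the adjoint (costate) variable. The last condition is the pointwise optimization of the Hamiltonian referred to above.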

Related lectures

Optimal Control: OCPs

Covers Optimal Control Problems, focusing on necessary conditions, the existence of optimal controls, and numerical solutions.

Optimal Control Theory: OCPs

Covers Optimal Control Theory, focusing on Optimal Control Problems (OCPs) and the calculus of variations.

Nonlinear Model Predictive Control

Explores Nonlinear Model Predictive Control, covering stability, optimality, pitfalls, and examples.

Nonlinear Programming: Part I

Covers the fundamentals of Nonlinear Programming and its applications in Optimal Control, exploring techniques, examples, optimality definitions, and necessary conditions.

Nonlinear Model Predictive Control: Stability and Design Steps

Explores Nonlinear Model Predictive Control principles, stability analysis, design steps, and practical considerations.