
# Optimal Control: NMPC (Lecture)

Description

This lecture covers Nonlinear Model Predictive Control (NMPC), with topics including setpoint stabilization, terminal constraints, different NMPC formulations, turnpikes, dissipativity, economic NMPC, and Pontryagin's Maximum Principle. The instructor discusses the principle of optimality, singular problems, and stability of NMPC schemes.



Related concepts (92)


Stochastic control

Stochastic control, or stochastic optimal control, is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with a known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task at minimum cost, suitably defined, despite the presence of this noise.

Model predictive control

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification.
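The receding-horizon idea behind MPC can be sketched in a few lines. This is a toy illustration, not code from the lecture: a scalar linear plant with made-up numbers, a short prediction horizon, and a crude brute-force search over a small admissible input set standing in for the optimizer. At each step only the first input of the best predicted sequence is applied, then the optimization is repeated from the new state.

```python
# Minimal receding-horizon MPC sketch (hypothetical plant and cost weights):
# plant x+ = a*x + b*u, quadratic stage cost, input constrained to a small set.
import itertools

a, b = 1.2, 1.0       # unstable scalar plant (illustrative numbers)
N = 3                 # prediction horizon
U = [-1.0, 0.0, 1.0]  # admissible inputs (the input constraint set)

def predict_cost(x, seq):
    """Simulate the model over the horizon and accumulate the cost."""
    cost = 0.0
    for u in seq:
        cost += x * x + 0.1 * u * u   # stage cost x^2 + r*u^2
        x = a * x + b * u
    return cost + 10.0 * x * x        # terminal penalty

def mpc_step(x):
    # Enumerate all input sequences of length N; apply only the first input
    # of the cheapest one -- this is the receding-horizon principle.
    best = min(itertools.product(U, repeat=N), key=lambda s: predict_cost(x, s))
    return best[0]

x = 2.0
for _ in range(10):
    x = a * x + b * mpc_step(x)       # closed loop: re-optimize every step
```

Real MPC replaces the enumeration with a structured optimization (a QP in the linear-quadratic case, an NLP for NMPC), but the closed-loop logic is exactly this loop.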

Optimal control

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure.


Bang–bang control

In control theory, a bang–bang controller (also called a hysteresis, two-step, or on–off controller) is a feedback controller that switches abruptly between two states. These controllers may be realized in terms of any element that provides hysteresis. They are often used to control a plant that accepts a binary input, for example a furnace that is either completely on or completely off. Most common residential thermostats are bang–bang controllers. The Heaviside step function in its discrete form is an example of a bang–bang control signal.
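The thermostat example above can be made concrete with a short sketch (illustrative temperatures and dynamics, not from the source): the heater switches fully on below a lower threshold and fully off above an upper one, and inside the hysteresis band it keeps its previous state, which prevents rapid switching around the setpoint.

```python
# Toy on-off (bang-bang) thermostat with a hysteresis band [low, high].
def make_thermostat(low=19.0, high=21.0):
    state = {"on": False}
    def control(temp):
        if temp < low:
            state["on"] = True    # too cold: switch heater fully on
        elif temp > high:
            state["on"] = False   # too warm: switch heater fully off
        return state["on"]        # inside the band: keep previous state
    return control

heater = make_thermostat()
# Crude first-order model: temperature rises while on, falls while off.
temp, history = 18.0, []
for _ in range(8):
    on = heater(temp)
    history.append(on)
    temp += 0.8 if on else -0.5
```

Starting cold, the heater stays on through the band until the upper threshold is crossed, then stays off while the room cools back through the band.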

Control theory

Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality. To do this, a controller with the requisite corrective behavior is required.

EE-715: Optimal control

This doctoral course provides an introduction to optimal control covering fundamental theory, numerical implementation and problem formulation for applications.

Related lectures (26)

Nonlinear Model Predictive Control

Explores Nonlinear Model Predictive Control, covering stability, optimality, pitfalls, and examples.

Optimal Control: OCPs

Covers Optimal Control Problems focusing on necessary conditions, existence of optimal controls, and numerical solutions.

Linear Quadratic (LQ) Optimal Control: Proof of Theorem

Covers the proof of the recursive formula for the optimal gains in LQ control over a finite horizon.
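The recursive formula referenced here can be sketched numerically. Assuming the standard discrete-time setting (dynamics x+ = A x + B u, cost sum of x'Qx + u'Ru plus a terminal term x'Qf x; the lecture's exact notation may differ), the optimal gains come from a Riccati difference equation solved backward from the terminal cost:

```python
# Finite-horizon LQ: backward Riccati recursion for the gains K_k,
# then a forward closed-loop simulation with u_k = -K_k x_k.
# Matrices below are illustrative (a double-integrator-like example).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
Qf = 10 * np.eye(2)   # terminal weight
N = 20                # horizon length

P = Qf
gains = []
for _ in range(N):    # backward in time, starting from P_N = Qf
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()       # gains[k] is K_k for step k

# Forward pass: the gains stabilize the system from an initial condition.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = (A - B @ K) @ x
```

The key point the proof establishes is that each K_k depends only on the cost-to-go matrix P from the step after it, which is why a single backward sweep suffices.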

Infinite-Horizon LQ Control: Solution & Example

Explores Infinite-Horizon Linear Quadratic (LQ) optimal control, emphasizing solution methods and practical examples.
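As the horizon grows, the time-varying gains converge to a single constant gain. A minimal sketch of one standard solution method (again assuming x+ = A x + B u with quadratic cost; notation is mine, not necessarily the lecture's) iterates the Riccati recursion to a fixed point and checks closed-loop stability:

```python
# Infinite-horizon LQ: iterate the discrete Riccati recursion to a fixed
# point P, extract the stationary gain K, and verify the closed loop is
# stable (spectral radius of A - B K inside the unit circle).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative example system
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

P = Q.copy()
for _ in range(1000):
    Pn = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
        R + B.T @ P @ B, B.T @ P @ A
    )
    if np.max(np.abs(Pn - P)) < 1e-10:   # fixed point reached
        P = Pn
        break
    P = Pn

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
rho = float(max(abs(np.linalg.eigvals(A - B @ K))))  # should be < 1
```

In practice one would call a dedicated solver for the discrete algebraic Riccati equation (e.g. `scipy.linalg.solve_discrete_are`) rather than iterating by hand, but the fixed-point view mirrors the finite-horizon recursion as the horizon tends to infinity.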

Optimal Control Theory: OCPs

Covers Optimal Control Theory, focusing on Optimal Control Problems (OCPs) and the calculus of variations.