Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.
Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to calculus of variations by Edward J. McShane. Optimal control can be seen as a control strategy in control theory.
Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost functional. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition, also known as Pontryagin's minimum principle or simply Pontryagin's principle), or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition).
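In a typical continuous-time formulation (the symbols below are generic placeholders rather than notation taken from this page), the problem is to choose a control trajectory u(·) that minimizes a cost functional subject to the system dynamics:

\[
\min_{u(\cdot)} \; J = \Phi\bigl(x(T)\bigr) + \int_0^T L\bigl(x(t),u(t),t\bigr)\,dt
\quad \text{subject to} \quad
\dot{x}(t) = f\bigl(x(t),u(t),t\bigr), \qquad x(0) = x_0.
\]

Pontryagin's principle and the Hamilton–Jacobi–Bellman equation both characterize solutions of problems of this form.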
We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order to minimize the total traveling time? In this example, the term control law refers specifically to the way in which the driver presses the accelerator and shifts the gears.
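One way to make this concrete (the dynamics below are only a sketch; the grade profile \(\theta(x)\), resistance term \(r(v)\), and force bounds are illustrative assumptions, not part of the original example) is as a minimum-time problem:

\[
\min_{u(\cdot)} \; T
\quad \text{subject to} \quad
\dot{x} = v, \qquad m\,\dot{v} = u - m g \sin\theta(x) - r(v), \qquad
u_{\min} \le u \le u_{\max}, \qquad x(0) = 0, \;\; x(T) = x_f,
\]

where x is the position along the road, v is the speed, and u is the drive force commanded through the pedal.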
Learn how to describe, model and control urban traffic congestion in simple ways and gain insight into advanced traffic management schemes that improve mobility in cities and highways.
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes.
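In a generic discrete-time setting (F denotes the per-period payoff, T(x, a) the state transition, β a discount factor, and Γ(x) the set of feasible actions; this notation is illustrative), the Bellman equation takes the form:

\[
V(x) = \max_{a \in \Gamma(x)} \Bigl\{ F(x,a) + \beta\, V\bigl(T(x,a)\bigr) \Bigr\}.
\]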
The costate equation is related to the state equation used in optimal control. It is also referred to as the auxiliary, adjoint, influence, or multiplier equation. It is stated as a vector of first-order differential equations whose right-hand side is the vector of partial derivatives of the negative of the Hamiltonian with respect to the state variables. The costate variables can be interpreted as Lagrange multipliers associated with the state equations.
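Spelled out with the Hamiltonian H of the formulation sketched above (the notation is generic), this description corresponds to:

\[
\dot{\lambda}(t) = -\,\frac{\partial H}{\partial x}\bigl(x(t),u(t),\lambda(t),t\bigr),
\qquad
H(x,u,\lambda,t) = L(x,u,t) + \lambda^{\mathsf T} f(x,u,t).
\]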
In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables). It is named after the mathematician Joseph-Louis Lagrange. The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied.
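For a single equality constraint (here f is the objective and g the constraint function; the notation is generic), the method introduces a multiplier λ and seeks stationary points of the Lagrangian:

\[
\mathcal{L}(x,\lambda) = f(x) - \lambda\, g(x),
\qquad
\nabla_x \mathcal{L} = 0 \;\Longleftrightarrow\; \nabla f(x) = \lambda\, \nabla g(x),
\qquad g(x) = 0.
\]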
This course covers methods for the analysis and control of systems with multiple inputs and outputs, which are ubiquitous in modern technology and industry. Special emphasis will be given to discrete- ...
This doctoral course provides an introduction to optimal control covering fundamental theory, numerical implementation and problem formulation for applications.
Covers the fundamentals of optimal control theory, focusing on defining optimal control problems (OCPs), existence of solutions, performance criteria, physical constraints, and the principle of optimality.
The European Union's Green Deal aims for a 55% reduction in greenhouse gas emissions by 2030. To reach this goal, a massive integration of Renewable Energy Sources (RES) into the power grid is necessary. As RES become a large part of the electricity genera ...
This paper introduces a novel method for data-driven robust control of nonlinear systems based on the Koopman operator, utilizing Integral Quadratic Constraints (IQCs). The Koopman operator theory facilitates the linear representation of nonlinear system d ...
In this paper we will consider distributed Linear-Quadratic Optimal Control Problems dealing with Advection-Diffusion PDEs for high values of the Peclet number. In this situation, computational instabilities occur, both for steady and unsteady cases. A Str ...