The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. Inspired by—but distinct from—the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian.
Consider a dynamical system of first-order differential equations

$\dot{x}(t) = f(x(t), u(t), t)$

where $x(t)$ denotes a vector of state variables and $u(t)$ a vector of control variables. Once initial conditions $x(t_0) = x_0$ and controls $u(t)$ are specified, a solution to the differential equations, called a trajectory $x(t)$, can be found. The problem of optimal control is to choose $u(t)$ (from some set $\mathcal{U}$) so as to maximize or minimize a certain objective function between an initial time $t = t_0$ and a terminal time $t = t_1$ (where $t_1$ may be infinity). Specifically, the goal is to optimize a performance index $I(x(t), u(t), t)$ defined at each point in time,

$\max_{u} \; J = \int_{t_0}^{t_1} I(x(t), u(t), t) \, \mathrm{d}t$
subject to the above equations of motion of the state variables. The solution method involves defining an ancillary function known as the control Hamiltonian

$H(x(t), u(t), \lambda(t), t) \equiv I(x(t), u(t), t) + \lambda^{\top}(t) f(x(t), u(t), t)$

which combines the objective function and the state equations much like a Lagrangian in a static optimization problem, except that the multipliers $\lambda(t)$, referred to as costate variables, are functions of time rather than constants.
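As a concrete sketch (an assumed example, not from the article), take the scalar system $\dot{x} = f(x, u) = u$ with running payoff $I(x, u) = -u^2$; the Hamiltonian $H = I + \lambda f$ can then be coded directly:

```python
# Illustrative sketch (assumed example): scalar state equation x' = u,
# running payoff I(x, u) = -u**2, so H = I + lam * f = -u**2 + lam * u.

def f(x, u):
    # state equation: x' = f(x, u) = u
    return u

def I(x, u):
    # running payoff to be maximized
    return -u**2

def hamiltonian(x, u, lam):
    # control Hamiltonian H = I(x, u) + lam * f(x, u)
    return I(x, u) + lam * f(x, u)

print(hamiltonian(0.0, 1.0, 2.0))  # -1 + 2*1 = 1.0
```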
The goal is to find an optimal control policy function $u^*(t)$ and, with it, an optimal trajectory of the state variable $x^*(t)$, which by Pontryagin's maximum principle are the arguments that maximize the Hamiltonian,

$H(x^*(t), u^*(t), \lambda(t), t) \geq H(x^*(t), u(t), \lambda(t), t)$

for all $u(t) \in \mathcal{U}$ and $t \in [t_0, t_1]$.
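The maximum condition holds pointwise in time: at each instant, holding the state and costate fixed, the optimal control maximizes $H$ over the admissible set. A minimal numerical sketch (assuming the example Hamiltonian $H(u) = \lambda u - u^2$, whose maximizer is $u = \lambda/2$) recovers this by grid search:

```python
# Pointwise maximization of an assumed example Hamiltonian
# H(u) = lam*u - u**2 at a fixed (x, lam); the analytic argmax is u = lam/2.

def hamiltonian(u, lam):
    return lam * u - u**2

lam = 1.0
grid = [i / 1000 for i in range(1001)]  # admissible set U = [0, 1], discretized
u_star = max(grid, key=lambda u: hamiltonian(u, lam))
print(u_star)  # close to lam / 2 = 0.5
```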
The first-order necessary conditions for a maximum are given by

$\frac{\partial H}{\partial u} = 0,$

which is the maximum principle,

$\frac{\partial H}{\partial \lambda} = \dot{x},$

which generates the state transition function $\dot{x} = f(x(t), u(t), t)$, and

$\frac{\partial H}{\partial x} = -\dot{\lambda},$

which generates

$\dot{\lambda} = -\left[ \frac{\partial I}{\partial x} + \lambda^{\top} \frac{\partial f}{\partial x} \right],$

the latter of which are referred to as the costate equations.
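These conditions reduce the optimal control problem to a two-point boundary value problem, which can be solved by shooting on the initial costate. As a hedged sketch (an assumed example, not from the article): maximize $-\int_0^1 u^2\,\mathrm{d}t$ subject to $\dot{x} = u$, $x(0) = 0$, $x(1) = 1$. Here $H = -u^2 + \lambda u$, so the maximum condition gives $u = \lambda/2$ and the costate equation gives $\dot{\lambda} = 0$:

```python
# Shooting method for an assumed example: maximize -∫ u² dt subject to
# x' = u, x(0) = 0, x(1) = 1.
# Pontryagin conditions: H = -u**2 + lam*u; ∂H/∂u = 0 ⇒ u = lam/2;
# ∂H/∂x = 0 ⇒ lam' = 0, so the costate is constant along the trajectory.

def terminal_state(lam0, steps=1000):
    # forward Euler integration of x' = u with u = lam0 / 2 (constant costate)
    x, dt = 0.0, 1.0 / steps
    for _ in range(steps):
        u = lam0 / 2          # maximum condition ∂H/∂u = 0
        x += u * dt           # state equation x' = u
    return x

# bisect on the initial costate lam0 so that the boundary condition x(1) = 1 holds
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if terminal_state(mid) < 1.0:
        lo = mid
    else:
        hi = mid
lam0 = (lo + hi) / 2
print(lam0)  # ≈ 2, so u*(t) ≡ 1 and x*(t) = t
```

The analytic solution is $\lambda \equiv 2$, $u^* \equiv 1$, $x^*(t) = t$, which the bisection recovers numerically.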
A shadow price is the monetary value assigned to an abstract or intangible commodity which is not traded in the marketplace. This often takes the form of an externality. Shadow prices are also known as the recalculation of known market prices in order to account for the presence of distortionary market instruments (e.g. quotas, tariffs, taxes or subsidies). Shadow prices are the real economic prices given to goods and services after they have been appropriately adjusted by removing distortionary market instruments and incorporating the societal impact of the respective good or service.
Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints for the state or input controls. It states that it is necessary for any optimal control along with the optimal state trajectory to solve the so-called Hamiltonian system, which is a two-point boundary value problem, plus a maximum condition of the control Hamiltonian.
Optimal control theory makes it possible to determine the control of a system that minimizes (or maximizes) a performance criterion, possibly subject to constraints on the control or on the state of the system. This theory is a generalization of the calculus of variations. It comprises two strands: the maximum principle (or minimum principle, depending on how the Hamiltonian is defined), due to Lev Pontryagin and his collaborators at the Steklov Institute of Mathematics, and the Hamilton-Jacobi-Bellman equation, a generalization of the Hamilton-Jacobi equation and a direct consequence of dynamic programming, initiated in the United States by Richard Bellman.