
Neighboring extremals in optimization and control

Abstract

Optimization arises naturally when process performance needs improvement. This is often the case in industry because of competition: the product has to be offered at the lowest possible cost. From the point of view of control, optimization consists in designing a control policy that best satisfies the chosen objectives. Most optimization schemes rely on a process model, which, however, is always an approximation of the real plant. Hence, the resulting optimal control policy is suboptimal for the real process. The fact that accurate models can be prohibitively expensive to build has triggered the development of a field of research known as Optimization under Uncertainty. One promising approach in this field draws a strong parallel between optimization under uncertainty and control. This approach, labeled NCO tracking, considers the Necessary Conditions of Optimality (NCO) of the optimization problem as the controlled outputs. The approach is still under development, and the present work is the most recent contribution to this development. The problem of NCO tracking can be divided into several subproblems that have been studied separately in earlier works. Two main categories can be distinguished: (i) tracking the NCO associated with active constraints, and (ii) tracking the NCO associated with sensitivities. Research on the former category is mature. The latter problem is more difficult to solve since the sensitivity part of the NCO cannot be measured directly on the real process. The present work proposes a method to tackle these sensitivity problems based on the theory of Neighboring Extremals (NE). More precisely, NE control provides a way of calculating a first-order approximation to the sensitivity part of the NCO. This idea is developed for static optimization problems and for both nonsingular and singular dynamic optimization problems. The approach is illustrated via simulated examples: steady-state optimization of a continuous chemical reactor, optimal control of a semi-batch reactor, and optimal control of a steered car.

Model Predictive Control (MPC) is a control scheme that can accommodate both process constraints and nonlinear process models. The repeated solution of a dynamic optimization problem provides an update of the control variables based on the current state, and therefore provides feedback. One of the major drawbacks of MPC lies in the expensive computations required to update the control policy, which often results in a low sampling frequency for the control loop. This limitation can be dramatic for fast systems and for systems exhibiting a strong dispersion between the predicted and the real state, such as unstable systems. In the MPC framework, two main methods have been proposed to tackle these difficulties: (i) the use of a pre-stabilizing feedback operating in combination with the MPC scheme, and (ii) the use of robust MPC. The drawback of the former approach is that there exists no systematic way of designing such a feedback, nor is there any systematic way of analyzing the interaction between the MPC controller and this additional feedback. This work proposes to use NE theory to design this additional feedback, and it provides a systematic way of analyzing the resulting control scheme. The approach is illustrated via the control of a simulated unstable continuous stirred-tank reactor and is applied successfully to two laboratory-scale set-ups, an inverted pendulum and a helicopter model called Toycopter. These applications illustrate the stabilizing potential of NE control for fast and unstable systems.

In the case of a strong dispersion between the state trajectories predicted by the model and those of the real process, robust MPC becomes infeasible. This problem can be addressed using robust MPC based on multiple input profiles, where the inherent feedback provided by MPC is explicitly taken into account, thereby increasing the size of the set of feasible inputs. The drawback of this scheme is its very high computational complexity. This work proposes to use NE theory in the robust MPC framework as an efficient way of dealing with the feasibility issue while limiting the computational complexity of the approach. The approach is illustrated via the control of a simulated unstable continuous stirred-tank reactor and of an inverted pendulum.
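As a rough illustration of the neighboring-extremal idea, the sketch below uses a linear-quadratic setting, where the first-order correction around the nominal optimum is exact and the NE gains coincide with the time-varying LQR gains obtained from a backward Riccati recursion. The model matrices and weights are illustrative assumptions, not the thesis case studies.

```python
import numpy as np

# Illustrative discrete-time model (assumed; not one of the thesis examples).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
N = 50

# Backward Riccati recursion: in the linear-quadratic case the
# neighboring-extremal gains coincide with the time-varying LQR gains K_k.
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains = gains[::-1]          # gains[k] is the gain applied at stage k

def rollout(dx0, policy):
    """Simulate the deviation dynamics and accumulate the quadratic cost."""
    x, cost = np.array(dx0, float), 0.0
    for k in range(N):
        u = policy(k, x)     # correction around the nominal input u* = 0
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return cost + x @ Q @ x  # terminal penalty

dx0 = np.array([1.0, 0.0])   # perturbation of the initial state
open_loop = rollout(dx0, lambda k, x: np.zeros(1))
ne_feedback = rollout(dx0, lambda k, x: -gains[k] @ x)
assert ne_feedback < open_loop
```

In the LQ case this correction is exact; for the nonlinear problems treated in the thesis it is only a first-order approximation around the nominal extremal.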


Related concepts

Model predictive control

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification.
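The receding-horizon mechanism described above can be sketched in a few lines: at every sampling instant a finite-horizon problem is solved from the current state and only the first input is applied. The unstable scalar model, weights, and horizon below are purely illustrative assumptions, and constraints, one of MPC's main strengths, are omitted to keep the example short.

```python
import numpy as np

# Illustrative unstable scalar model (assumed, not from any real plant).
A = np.array([[1.2]])
B = np.array([[1.0]])
Q, R, H = 1.0, 0.1, 10          # stage weights and prediction horizon

def mpc_input(x0):
    """Solve the unconstrained finite-horizon problem; return the first input."""
    # Prediction matrices: x_k = A^k x0 + sum_j A^(k-1-j) B u_j, k = 1..H
    F = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, H + 1)])
    G = np.zeros((H, H))
    for k in range(1, H + 1):
        for j in range(k):
            G[k - 1, j] = (np.linalg.matrix_power(A, k - 1 - j) @ B)[0, 0]
    # Minimize sum_k Q x_k^2 + R u_k^2 (condensed unconstrained QP)
    Hmat = Q * G.T @ G + R * np.eye(H)
    f = Q * G.T @ F @ x0
    U = np.linalg.solve(Hmat, -f)
    return U[0]                  # receding horizon: apply only u_0

# Closed loop: re-solving at every step is what provides feedback.
x = np.array([1.0])
for _ in range(30):
    u = mpc_input(x)
    x = A @ x + B[:, 0] * u

print(abs(x[0]))                 # the open-loop-unstable state is regulated
```

Re-solving the optimization at each step is exactly the source of the computational burden, and of the low sampling frequency, that the NE-based pre-stabilizing feedback in the thesis is meant to alleviate.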

Optimal control

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure.
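A classical instance of such a problem is a minimum-energy state transfer for a linear system, which admits a closed-form open-loop solution via the controllability Gramian. The discrete double integrator below is a standard textbook example, not tied to any specific application.

```python
import numpy as np

# Discrete double integrator (illustrative): position/velocity, input = force.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
N = 20
x0 = np.zeros(2)
xf = np.array([1.0, 0.0])        # reach position 1 at rest in N steps

# Controllability Gramian W = sum_j A^j B B' (A')^j, j = 0..N-1
W = sum(np.linalg.matrix_power(A, j) @ B @ B.T @ np.linalg.matrix_power(A.T, j)
        for j in range(N))
lam = np.linalg.solve(W, xf - np.linalg.matrix_power(A, N) @ x0)

# Optimal inputs u_k = B' (A')^(N-1-k) lam minimize sum_k u_k^2
x = x0.copy()
for k in range(N):
    u = B.T @ np.linalg.matrix_power(A.T, N - 1 - k) @ lam
    x = A @ x + B @ u

print(x)                          # ends at the target state xf
```

This is an open-loop solution; any model mismatch would leave the real process off-target, which is precisely the motivation for feedback-based schemes such as NCO tracking and MPC.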

Mathematical optimization

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
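As a toy instance of continuous optimization, the sketch below runs plain gradient descent on a smooth convex function; the function, starting point, and step size are arbitrary illustrative choices.

```python
import numpy as np

# A smooth convex objective with minimizer (1, -2) (illustrative).
def f(z):
    return (z[0] - 1.0)**2 + 2.0 * (z[1] + 2.0)**2

def grad(z):
    return np.array([2.0 * (z[0] - 1.0), 4.0 * (z[1] + 2.0)])

z = np.zeros(2)
step = 0.2                       # fixed step size, small enough to converge
for _ in range(200):
    z = z - step * grad(z)

print(z)                         # approaches the minimizer (1, -2)
```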

Related MOOCs

Digital Signal Processing I

Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP.

Digital Signal Processing II

Adaptive signal processing, A/D and D/A. This module provides the basic tools for adaptive filtering and a solid mathematical framework for sampling and quantization.

Digital Signal Processing III

Advanced topics: this module covers real-time audio processing (with examples on a hardware board), image processing and communication system design.

Related publications

Fabio Zoccolan, Gianluigi Rozza

In this paper we will consider distributed Linear-Quadratic Optimal Control Problems dealing with Advection-Diffusion PDEs for high values of the Peclet number. In this situation, computational instabilities occur, both for steady and unsteady cases. A Str ...

David Atienza Alonso, Amir Aminifar, Alireza Amirshahi, José Angel Miranda Calero, Jonathan Dan

The rapid development of wearable biomedical systems now enables real-time monitoring of electroencephalography (EEG) signals. Acquisition of these signals relies on electrodes. These systems must meet the design challenge of selecting an optimal set of el ...

2024

Drones hold promise to assist in civilian tasks. To realize this application, future drones must operate within large cities, covering large distances while navigating within cluttered urban landscapes. The increased efficiency of winged drones over rotary ...