# Saurabh Anand Deshpande

This person is no longer with EPFL


Related research domains (2)

Related publications (7)

Optimal control

Optimal control theory makes it possible to determine the control of a system that minimizes (or maximizes) a performance criterion, possibly subject to constraints on the control or on the state of the system. This theory is a generalization of the calculus of variations. It comprises two branches: the maximum principle (or minimum principle, depending on how the Hamiltonian is defined), due to Lev Pontryagin and his collaborators at the Steklov Institute of Mathematics, and the Hamilton-Jacobi-Bellman equation, a generalization of the Hamilton-Jacobi equation and a direct consequence of dynamic programming, initiated in the United States by Richard Bellman.
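As a concrete illustration of the dynamic-programming branch mentioned above (my own sketch, not taken from this page), the finite-horizon discrete-time linear-quadratic regulator can be solved by a backward Riccati recursion, the discrete-time analogue of the Hamilton-Jacobi-Bellman equation. The double-integrator system and all numerical values below are invented for illustration:

```python
import numpy as np

def lqr_backward(A, B, Q, R, Qf, N):
    """Return feedback gains K[0..N-1] minimizing the finite-horizon
    quadratic cost sum(x'Qx + u'Ru) + x_N' Qf x_N by dynamic programming."""
    P = Qf  # cost-to-go matrix at the final time
    gains = []
    for _ in range(N):
        # Minimizing u'Ru + (Ax+Bu)'P(Ax+Bu) over u gives u = -K x
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    gains.reverse()  # gains[k] is the gain to apply at stage k
    return gains

# Illustrative double-integrator dynamics x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]]); Qf = 10 * np.eye(2)

gains = lqr_backward(A, B, Q, R, Qf, N=20)
x = np.array([[5.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)  # apply the optimal feedback u = -K x
print(float(np.linalg.norm(x)))  # state is driven toward the origin
```

The recursion runs backward from the terminal cost, which is why the collected gains are reversed before use.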

Adaptation (biology)

In biology, adaptation can be defined in general terms as the functional adjustment of a living being to its environment and, in particular, as the fitting of an organ to its function. Adaptation brings a living organism into accord with its external conditions. It refines the organism's organs, making them better suited to the role they appear to play in the individual's life. It brings the whole organism into coherence with its environment.

This thesis addresses the problem of industrial real-time process optimization in the presence of uncertainty. Since a process model is typically used to compute the optimal operating conditions, both plant-model mismatch and process disturbances can result in suboptimal or, worse, infeasible operation. Hence, for practical applications, methodologies that help avoid re-optimization during process operation, at the cost of an acceptable optimality loss, become important. The design and analysis of such approximate solution strategies in real-time optimization (RTO) demand a careful analysis of the components of the necessary conditions of optimality. This thesis analyzes the role of constraints in process optimality in the presence of uncertainty. The analysis proceeds in two steps: first, a general analysis quantifies the effect of input adaptation on process performance for static RTO problems; second, the general features of input adaptation for dynamic RTO problems are analyzed with a focus on the constraints. Accordingly, the thesis is organized in two parts:

- for static RTO, a joint analysis of the model optimal inputs, the plant optimal inputs and a class of adapted inputs, and
- for dynamic RTO, an analytical study of the effect of local adaptation of the model optimal inputs.

The first part (Chapters 2 and 3) addresses the problem of adapting the inputs to optimize the plant. The investigation takes a constructive viewpoint, but it is limited to static RTO problems modeled as parametric nonlinear programming (pNLP) problems. In this approach, the inputs are not limited to local adaptations of the model optimal inputs; instead, they can change significantly to optimize the plant. Hence, one must account for the fact that the sets of active constraints for the model and the plant can differ.
It is proven that, for a wide class of systems, detecting a change in the active set contributes only negligibly to optimality, as long as the adapted solution remains feasible. More precisely, if η denotes the magnitude of the parametric variations and if the linear independence constraint qualification (LICQ) and the strong second-order sufficient condition (SSOSC) hold for the underlying pNLP, the optimality loss due to any feasible input that conserves only the strict nominal active set is of magnitude O(η²), irrespective of whether or not the set of active constraints changes. The implication of this result for a static RTO algorithm is that it suffices to prioritize the satisfaction of a core set of constraints, as long as the feasibility requirements can be met.

The second part (Chapters 4 and 5) of the thesis deals with a way of adapting the model optimal inputs in dynamic RTO problems. This adaptation is made along two sets of directions, such that one type of adaptation does not affect the nominally active constraints while the other does. These directions are termed the sensitivity-seeking (SS) and the constraint-seeking (CS) directions, respectively. The SS and CS directions are defined as elements of a fairly general function space of input variations. A mathematical criterion is derived to define SS directions for a general class of optimal control problems involving both path and terminal constraints. According to this criterion, the SS directions turn out to be solutions of linear integral equations that are completely defined by the model optimal solution. The CS directions are then chosen orthogonal to the subspace of SS directions, where orthogonality is defined with respect to a chosen inner product on the space of input variations. The corresponding subspaces are infinite-dimensional subspaces of the function space of input variations.
It is proven that, when uncertainty is modeled as small parametric variations, this classification of input adaptation leads to clearly distinguishable cost variations. More precisely, if η denotes the magnitude of the parametric variations, adaptation of the model optimal inputs along SS directions causes a cost variation of magnitude O(η²), whereas the cost variation due to input adaptation along CS directions is of magnitude O(η). Furthermore, a numerical procedure is proposed for computing the SS and CS components of a given input variation. These components are the projections of the input variation onto the infinite-dimensional subspaces of SS and CS directions. The numerical procedure consists of three steps:

1. approximation of the optimal control problem by a pNLP problem,
2. projection of the given direction onto the finite-dimensional SS and CS subspaces of the pNLP, and
3. reconstruction of the SS and CS components of the original problem from those of the pNLP.
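In the finite-dimensional pNLP approximation, the projection step reduces to ordinary linear algebra: with the Euclidean inner product, the CS subspace can be taken as the row space of the Jacobian of the active constraints, and the SS subspace as its null space. A minimal sketch of this decomposition (the Jacobian and the variation are made-up numbers, not from the thesis):

```python
import numpy as np

# Hypothetical pNLP with 4 discretized inputs and 2 active constraints.
# Rows of G are the gradients of the active constraints w.r.t. the inputs
# (values invented for illustration).
G = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

du = np.array([1.0, 2.0, 3.0, 4.0])  # a given input variation

# Orthogonal projector onto the CS subspace (row space of G): P_cs = G+ G
P_cs = np.linalg.pinv(G) @ G
du_cs = P_cs @ du          # constraint-seeking component
du_ss = du - du_cs         # sensitivity-seeking component (null space of G)

# Sanity checks: the SS component leaves the active constraints unchanged
# to first order, and the two components are orthogonal.
print(np.allclose(G @ du_ss, 0))     # True
print(np.isclose(du_ss @ du_cs, 0))  # True
```

The pseudoinverse handles rank-deficient Jacobians as well, although under LICQ the rows of `G` are linearly independent by assumption.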

Dominique Bonvin, Benoît Chachuat, Saurabh Anand Deshpande

This paper deals with input adaptation in dynamic processes in order to guarantee feasible and optimal operation despite the presence of uncertainty. The proposed adaptation consists in using the nominal optimal inputs and adding appropriately designed input variation functions. For optimal control problems having both terminal and mixed control-state path constraints, two orthogonal sets of directions can be distinguished in the space of input variation functions: the so-called sensitivity-seeking directions, along which a small variation will not affect the respective active constraints, and the complementary constraint-seeking directions, along which a variation will affect the respective constraints. It is shown that the sensitivity-seeking directions satisfy certain linear integral equations. Two selective input adaptation strategies are then defined, namely, adaptation in the sensitivity- and constraint-seeking directions. This paper proves the important result that, for small parametric perturbations, the cost variation resulting from adaptation in the sensitivity-seeking directions (over no input adaptation) is typically smaller than that due to adaptation in the constraint-seeking directions.
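The claimed scalings can be checked on a toy static problem (my own construction, not from the paper): at a constrained optimum with a strictly active constraint, moving along the constraint (a sensitivity-seeking direction) changes the cost only to second order, while moving along the constraint normal (a constraint-seeking direction) changes it to first order.

```python
import numpy as np

# Toy problem (illustrative only):
# minimize f(u) = (u1-2)^2 + (u2-2)^2  subject to  u1 + u2 <= 2.
# The constrained optimum is u* = (1, 1), with the constraint strictly active.
f = lambda u: (u[0] - 2) ** 2 + (u[1] - 2) ** 2
u_star = np.array([1.0, 1.0])

ss = np.array([1.0, -1.0]) / np.sqrt(2)   # tangent to the constraint (SS)
cs = np.array([-1.0, -1.0]) / np.sqrt(2)  # inward normal, stays feasible (CS)

for eps in (1e-1, 1e-2, 1e-3):
    d_ss = f(u_star + eps * ss) - f(u_star)  # = eps^2            -> O(eps^2)
    d_cs = f(u_star + eps * cs) - f(u_star)  # = 2*sqrt(2)*eps + eps^2 -> O(eps)
    print(f"eps={eps:g}  SS: {d_ss:.2e}  CS: {d_cs:.2e}")
```

Shrinking `eps` by a factor of 10 shrinks the SS cost variation by a factor of 100 but the CS variation only by a factor of about 10, mirroring the O(η²) versus O(η) result.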

Dominique Bonvin, Saurabh Anand Deshpande

In real-time optimization, enforcing the constraints that need to be active is important for optimality. In fact, it has been established in the context of parametric variations that, if these constraints are not satisfied, the optimality loss is O($\eta$), with $\eta$ denoting the magnitude of the parametric variations. In contrast, the loss of optimality upon enforcing the correct set of active constraints is O($\eta^2$). However, no result is available for the case where the set of active constraints changes due to parametric variations, which forms the subject of this paper. Herein it is shown that, if the optimal solution is unique for each value of the parametric perturbation, keeping only the strictly active constraints of the nominal solution active leads to an O($\eta^2$) loss in optimality, even when the remaining active constraints of the perturbed system differ from those of the nominal system. This, in turn, means that, in any input adaptation scheme for real-time optimization, identifying changes in active constraints is not important as long as the strictly active constraints of the nominal solution can be enforced to remain active.
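This O($\eta^2$) behavior can be observed on a small parametric quadratic program (my own construction, not taken from the paper): the nominal solution has an empty strict active set, the perturbed optimum activates a constraint, yet simply keeping the feasible nominal input loses only O($\eta^2$).

```python
import numpy as np

# Illustrative parametric QP (invented for this sketch):
# minimize (u1-1)^2 + (u2-eta)^2  subject to  u1 + u2 <= 1.
# At eta = 0 the optimum is (1, 0): the constraint is active but its
# multiplier is zero, so the strict nominal active set is empty.
# For eta > 0 the constraint is strictly active at the new optimum.

def cost(u, eta):
    return (u[0] - 1) ** 2 + (u[1] - eta) ** 2

for eta in (1e-1, 1e-2, 1e-3):
    u_opt = np.array([1 - eta / 2, eta / 2])  # perturbed optimum (projection
                                              # of (1, eta) onto u1 + u2 = 1)
    u_nom = np.array([1.0, 0.0])              # feasible input conserving the
                                              # (empty) strict nominal active set
    loss = cost(u_nom, eta) - cost(u_opt, eta)
    print(f"eta={eta:g}  loss={loss:.3e}  loss/eta^2={loss / eta ** 2:.3f}")
# The ratio loss/eta^2 stays at 0.5: the optimality loss is O(eta^2)
# even though the set of strictly active constraints has changed.
```

Here the change in the active set costs only second-order optimality, consistent with the paper's conclusion that detecting active-set changes is not critical for input adaptation.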

2011