
# Simulation-based optimization

Summary

Simulation-based optimization (also known as simply simulation optimization) integrates optimization techniques into simulation modeling and analysis. Because of the complexity of the simulation, the objective function may become difficult and expensive to evaluate. Usually, the underlying simulation model is stochastic, so that the objective function must be estimated using statistical estimation techniques (called output analysis in simulation methodology).
Once a system is mathematically modeled, computer-based simulations provide information about its behavior. Parametric simulation methods can be used to improve the performance of a system: the input of each variable is varied while the other parameters are held constant, and the effect on the design objective is observed. This is time-consuming, however, and improves the performance only partially. To obtain the optimal solution with minimum computation and time, the problem is instead solved iteratively, where each iteration moves the solution closer to the optimum.
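The iterative idea can be sketched with a toy stochastic simulation. The sketch below uses a newsvendor model with made-up prices and demand distribution: the expected profit of each candidate order quantity is estimated by averaging many simulation replications (the "output analysis" step), and the candidate set is then searched for the best estimate.

```python
import random

random.seed(0)
PRICE, COST = 5.0, 3.0          # sell price and unit cost (illustrative values)

def simulate_profit(order_qty):
    """One stochastic replication: demand is uniform on [50, 150]."""
    demand = random.uniform(50, 150)
    return PRICE * min(order_qty, demand) - COST * order_qty

def estimated_profit(order_qty, replications=10_000):
    """Estimate the expected profit by averaging simulation replications."""
    return sum(simulate_profit(order_qty) for _ in range(replications)) / replications

# Iteratively evaluate candidate order quantities and keep the best estimate.
candidates = range(50, 151, 5)
best_qty = max(candidates, key=estimated_profit)
# Analytically, the optimum sits near the critical fractile:
# 50 + 100*(PRICE-COST)/PRICE = 90, so the search should land close to it.
```

Because each objective evaluation is itself a Monte Carlo estimate, the trade-off between replication count (estimation noise) and number of candidate evaluations is exactly what simulation-optimization methods manage.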



Related courses (4)

ME-454: Modelling and optimization of energy systems

The goal of the lecture is to present and apply techniques for the modelling and the thermo-economic optimisation of industrial processes and energy systems. The lecture covers the problem statement and the solution methods for simulation and for single- and multi-objective optimisation problems.

ME-474: Numerical flow simulation

This course provides practical experience in the numerical simulation of fluid flows. Numerical methods are presented in the framework of the finite volume method. A simple solver is developed with Matlab, and commercial software is used for more complex problems.

MICRO-470: Scaling laws & simulations in micro & nanosystems

This class combines an analytical approach with finite element modeling (FEM) simulations to study scaling laws in MEMS/NEMS. The dominant physical effects and scaling effects when downsizing sensors and actuators in microsystems are discussed across a broad range of actuation principles.

Related lectures (7)

Many different algorithms developed in statistical physics, coding theory, signal processing, and artificial intelligence can be expressed by graphical models and solved (either exactly or approximately) with iterative message-passing algorithms on the model. One quantity of interest in these algorithms is the partition function. In graphical models without cycles (trees), the partition function can be computed efficiently by means of message-passing algorithms, like GDL or the sum-product algorithm. In contrast, when graphical models contain cycles, the computation of the partition function is in general intractable. Our contributions in this dissertation are: deriving deterministic upper and lower bounds on the partition function, and developing methods to approximate this quantity with Monte Carlo methods. Specifically, we obtain subtree-based upper and lower bounds which lead to theoretical results on optimality properties of the minimum-entropy subtree and, ultimately, to a greedy algorithm. Finally, we introduce and analyze a number of estimators that use Gibbs sampling to draw samples from different target distributions to estimate the value of the partition function. In one estimator, we demonstrate a novel strategy which combines Gibbs sampling and message-passing algorithms on trees.
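On a cycle-free model the partition function indeed follows from a single sweep of sum-product messages. A minimal sketch on a binary chain (toy potentials, not taken from the dissertation) that checks the message-passing result against brute-force enumeration:

```python
import itertools

n = 4
# Node potentials phi[i][x] and a shared pairwise potential psi[x][y]
# (arbitrary positive numbers chosen for illustration).
phi = [[1.0, 2.0], [2.0, 1.0], [1.5, 0.5], [1.0, 3.0]]
psi = [[1.0, 0.5], [0.5, 1.0]]

# Brute force: sum the product of all potentials over the 2^n configurations.
Z_bf = 0.0
for xs in itertools.product((0, 1), repeat=n):
    w = 1.0
    for i in range(n):
        w *= phi[i][xs[i]]
    for i in range(n - 1):
        w *= psi[xs[i]][xs[i + 1]]
    Z_bf += w

# Sum-product: one forward sweep of messages along the chain (a tree),
# each message marginalizing out the previous variable.
msg = [1.0, 1.0]
for i in range(n - 1):
    msg = [sum(msg[x] * phi[i][x] * psi[x][y] for x in (0, 1)) for y in (0, 1)]
Z_mp = sum(msg[x] * phi[n - 1][x] for x in (0, 1))
```

The sweep costs O(n) instead of O(2^n); on a graph with cycles no such single-pass elimination order exists, which is where the bounds and sampling estimators come in.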

The research community has been making significant progress in hardware implementation, numerical computing and algorithm development for optimization-based control. However, there are two key challenges that still have to be overcome for optimization-based control to be a viable option in the context of advanced industrial applications. First, the large existing gap between algorithm development and its deployment on platforms used by practitioners in industry. Second, from a more theoretical viewpoint, the lack of robustness of certain approaches, which are based on the unreasonable assumption that the model at hand perfectly represents the object under investigation. This thesis addresses the aforementioned challenges by establishing software toolboxes for automatic code generation, and proposing a data-driven methodology to enhance the performance of real-time optimization strategies during operation.
The first part of this thesis focuses on the efficient implementation of Model Predictive Control (MPC) based on first-order operator splitting methods. Because of the cheap numerical operations associated with them, splitting methods are favorable candidates for applications with limited computing power. We first identify the computational bottlenecks and, subsequently, discuss their efficient deployment on processors, Field Programmable Gate Arrays (FPGA), and heterogeneous platforms. For rapid prototyping and deployment, two code generation toolboxes are developed: SPLIT and LAFF. These possess a high-level parsing interface for MATLAB and yield optimized C code that can be directly used in a variety of FPGA platforms. Features such as pipelining, memory partitioning, and parallelization are automatically incorporated, not requiring users to have in-depth knowledge about computer architecture and low-level programming. We then propose a framework to a priori solve the co-design problem arising in splitting method-based MPC to provide trade-offs between resources and latency. We provide analytical expressions that can avoid the daunting and time-consuming task of exploring the design space manually, thus reducing the final application development time.
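To illustrate why first-order splitting methods are computationally cheap, here is a minimal projected-gradient sketch, one of the simplest operator-splitting schemes, applied to a box-constrained quadratic program of the kind that arises in condensed MPC. The problem data are made up, and this is not the SPLIT/LAFF implementation; the point is that each iteration needs only a matrix-vector product and a clipping step, both easy to pipeline on an FPGA.

```python
# minimize 0.5*x'Qx + q'x  subject to  0 <= x <= 1   (illustrative data)
Q = [[2.0, 0.0], [0.0, 2.0]]
q = [-2.0, -4.0]                  # unconstrained minimum would be at (1, 2)
lb, ub = 0.0, 1.0

def grad(x):
    """Gradient Qx + q: the only matrix-vector product per iteration."""
    return [sum(Q[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]

x = [0.0, 0.0]
step = 0.25                        # 1/L, with L the largest eigenvalue of Q
for _ in range(200):
    g = grad(x)
    # Gradient step followed by projection onto the box (elementwise clip).
    x = [min(ub, max(lb, x[i] - step * g[i])) for i in range(2)]
# The constrained optimum is (1, 1): the second coordinate hits its bound.
```

The absence of matrix factorizations or divisions in the loop is what makes such methods attractive on platforms with limited computing power.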
The second part of the thesis deals with learning plant-model mismatch using Gaussian processes (GPs) in Real-Time Optimization (RTO) schemes. Inaccurate models, the presence of disturbances, and time-varying conditions typically lead to the suboptimal operation of many plants. We use data-driven global surrogate models in the form of GPs to cope with such problems and show better numerical convergence and more effective noise handling than standard RTO techniques. We moreover prove that GPs can be certified as probabilistic and deterministic fully linear models, a key property to guarantee global convergence of derivative-free trust-region (DFT) methods. We then propose a novel DFT methodology to incorporate noise, which requires fewer plant evaluations than other alternatives. Finally, we conclude this work by performing experiments on a solid-oxide fuel cell system.
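A minimal sketch of the GP surrogate idea, with toy 1-D data and a squared-exponential kernel (not the thesis implementation): the posterior mean interpolates a handful of noisy plant evaluations and can then stand in for the unknown plant-model mismatch inside an optimization loop.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-0.5 * (a - b) ** 2 / length ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

X = [0.0, 1.0, 2.0]            # plant evaluation points (made up)
y = [0.0, 1.0, 0.5]            # observed mismatch values (made up)
noise = 1e-8                   # jitter / observation-noise variance

K = [[rbf(a, b) + (noise if i == j else 0.0)
      for j, b in enumerate(X)] for i, a in enumerate(X)]
alpha = solve(K, y)            # alpha = K^{-1} y

def posterior_mean(x_star):
    """GP posterior mean: k(x*, X) K^{-1} y."""
    return sum(rbf(x_star, xi) * a for xi, a in zip(X, alpha))

# With near-zero noise the posterior mean passes through the data points.
```

The same fitted model also yields a posterior variance (omitted here), which is what trust-region and acquisition-based schemes use to decide where the surrogate can be trusted.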

This thesis deals with dynamic optimization of batch processes, i.e. processes that are characterized by a finite time of operation and the frequent repetition of batches. The objective is to maximize the quantity of the desired product at final time while satisfying constraints during the batch (path constraints) and at final time (terminal constraints). The classical approach is to apply, in an open-loop fashion, input profiles that have been determined off-line using optimization. In practical applications, however, a conservative stand has to be taken since uncertainty is present, which may lead to constraint violations or non-optimal operation. Measurement-based optimization helps reduce this conservatism by using measurements to compensate for the effect of uncertainty. In order to achieve optimal operation, it has been proposed to track the necessary conditions of optimality (NCO). For the terminal-cost optimization of batch processes, the NCO consist of four parts: path constraints and path sensitivity conditions that have to be met during the batch, as well as terminal constraints and terminal sensitivity conditions that have to be met at final time. The intuitive approach is to handle the path conditions during the batch by using on-line measurements, and to implement the terminal conditions on a run-to-run basis since measurements of the terminal conditions are available off-line. However, run-to-run adaptation methods have two important disadvantages: (i) several runs are necessary to obtain optimal operation, and (ii) within-run perturbations cannot be compensated. In this thesis, it is shown via a variational analysis of the NCO that meeting the constraint parts of the NCO is more important than meeting the sensitivity parts. Therefore, methods for steering the system toward the terminal constraints by using on-line measurements are examined in order to give priority to meeting the constraint-seeking conditions.
This work proposes to track run-time references that lead to the constraints at final time. Thus, a two-time-scale methodology is proposed, where the task of meeting the active terminal constraints is addressed on-line using trajectory tracking, whilst pushing the sensitivities to zero is implemented on a run-to-run basis. Moreover, the use of Iterative Learning Control (ILC) to improve tracking is studied. Reference tracking is often accomplished with linear controllers of the PID type since they are widely accepted in industry and easy to implement. However, as batch processes operate over a wide region and are often highly nonlinear systems, tracking errors are inevitable. A possibility to reduce tracking errors is to use time-varying feedforward inputs. The feedforward inputs can be generated by using a process model, but the presence of model uncertainty can limit the performance. Instead of a process model, ILC uses error information from previous batches for computing the feedforward inputs iteratively. Since batch processes are often not reset to identical initial conditions, ILC schemes that provide a certain robustness to errors in initial conditions must be used, such as ILC with forgetting factor. In this work, a scheme that shifts the input of the previous run backward in time is proposed. The advantage of the input shift over the use of a forgetting factor is that, when the reference trajectory is constant and the system stable, the tracking error decreases with run time. In addition to a shift of the input, a shift of the error trajectory, which is known as anticipatory ILC, is also used. The foregoing methodological developments are illustrated by two case studies, a simulated semi-batch reactor and an experimental laboratory-scale batch distillation column. It is shown that meeting the terminal constraints is the most important optimality condition for the processes under study. 
Provided that selected composition measurements are available on-line, these measurements are used to steer the processes to the desired terminal constraints on product composition. In comparison to run-to-run optimization schemes, performance is improved from the first batch on, and within-run perturbations can be rejected.
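The run-to-run learning idea can be sketched with a simple P-type ILC update, a simpler relative of the input-shift and anticipatory schemes described above; the plant model and gains below are made up. Each run reuses the previous run's input trajectory plus a correction proportional to the previous run's tracking error, so the error shrinks from batch to batch without a process model.

```python
# First-order plant y[t] = a*y[t-1] + b*u[t], tracking a constant reference.
a, b = 0.2, 1.0                     # illustrative plant parameters
N, runs = 20, 30                    # samples per batch, number of batches
ref = [1.0] * N
u = [0.0] * N                       # input trajectory, refined run to run

def simulate(u):
    """Run one batch of the plant from zero initial condition."""
    y, prev = [], 0.0
    for t in range(N):
        prev = a * prev + b * u[t]
        y.append(prev)
    return y

gamma = 0.8                         # learning gain; contracts since |1 - gamma*b| < 1
rms_history = []
for _ in range(runs):
    y = simulate(u)
    e = [r - yt for r, yt in zip(ref, y)]
    rms_history.append((sum(ei * ei for ei in e) / N) ** 0.5)
    u = [ut + gamma * et for ut, et in zip(u, e)]   # P-type ILC update
# The RMS tracking error decreases from run to run toward zero.
```

The input-shift and anticipatory variants in the thesis modify which time index of the previous run's input and error is reused, trading this simple scheme's behavior for robustness to non-identical initial conditions.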