
# Mathematical optimization

Summary

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. More generally, optimization includes finding "best available" values of some objective function given a defined domain (or input).
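As a sketch of this idea (an illustration, not content from this page): the following minimal Python example minimizes a real function by iteratively improving a candidate input. The objective, step size, and iteration count are arbitrary assumptions.

```python
# Minimal sketch of continuous optimization: minimize a real function
# by repeatedly stepping against its gradient. The objective and the
# hyperparameters below are illustrative assumptions.

def minimize(f, grad, x0, step=0.1, iters=1000):
    """Plain gradient descent: follow the negative gradient of f."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Example: f(x) = (x - 2)^2 has its minimum at x = 2.
f = lambda x: (x - 2.0) ** 2
grad = lambda x: 2.0 * (x - 2.0)

x_star = minimize(f, grad, x0=0.0)
print(round(x_star, 4))  # converges to 2.0
```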

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Related courses (204)

MATH-265: Introduction to optimization and operations research

Introduction to major operations research models and optimization algorithms

MGT-418: Convex optimization

This course introduces the theory and application of modern convex optimization from an engineering perspective.

EE-556: Mathematics of data: from theory to computation

This course provides an overview of key advances in continuous optimization and statistical analysis for machine learning. We review recent learning formulations and models as well as their guarantees, describe scalable solution techniques and algorithms, and illustrate the trade-offs involved.

Related concepts (165)

Linear programming

Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear relationships.
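As an illustration (a sketch with an assumed toy problem, not content from this page): when a bounded LP has an optimum, at least one optimal solution lies at a vertex of the feasible polygon, so a tiny two-variable LP can be solved by enumerating the intersections of constraint pairs and keeping the best feasible one.

```python
from itertools import combinations

# Toy 2-variable LP solver by vertex enumeration (illustrative only):
# maximize c.x subject to A x <= b. Every pair of constraints, taken
# as equalities, defines a candidate vertex; the feasible vertex with
# the largest objective value is optimal.

def solve_lp_2d(c, A, b):
    best_x, best_val = None, None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel constraints: no unique intersection
        # Cramer's rule for the 2x2 system a1.x = b1, a2.x = b2
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        # keep the intersection only if it satisfies every constraint
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best_val is None or val > best_val:
                best_x, best_val = (x, y), val
    return best_x, best_val

# maximize x + 2y  s.t.  x + y <= 4,  x <= 2,  x >= 0,  y >= 0
A = [(1, 1), (1, 0), (-1, 0), (0, -1)]
b = [4, 2, 0, 0]
print(solve_lp_2d((1, 2), A, b))  # optimum at (0, 4) with value 8
```

Enumerating vertices is exponential in general; practical solvers use the simplex method or interior-point methods instead.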

Operations research

Operations research (operational research), often shortened to the initialism OR, is a discipline that deals with the development and application of advanced analytical methods to improve decision-making.

Convex optimization

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets).
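A minimal illustration of the idea (an assumed toy example, not material from this page): projected gradient descent minimizes a convex function over a convex set by alternating a gradient step with a projection back onto the set; for convex problems, every local minimum is global.

```python
# Projected gradient descent on a convex problem (illustrative sketch):
# minimize a convex function over a convex set by taking a gradient
# step, then projecting back onto the feasible set.

def project_interval(x, lo, hi):
    """Projection onto the convex set [lo, hi]: closest feasible point."""
    return max(lo, min(hi, x))

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=500):
    x = x0
    for _ in range(iters):
        x = project_interval(x - step * grad(x), lo, hi)
    return x

# minimize (x - 3)^2 over [0, 2]: the unconstrained minimum x = 3 is
# infeasible, so the constrained optimum sits at the boundary x = 2.
grad = lambda x: 2.0 * (x - 3.0)
print(projected_gradient(grad, x0=0.0, lo=0.0, hi=2.0))  # -> 2.0
```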

Related lectures (633)

2014

The electrical discharge machining (EDM) process was discovered in the 1950s and was then used essentially to destroy unrecoverable damaged screws. Since then, huge progress has been made in making this process reliable and able to perform the most complex machining operations on the most sophisticated materials. Two main processes use electrical discharge machining: in die-sinking EDM (DSEDM), an electrode moved along a usually vertical axis makes an imprint in a mechanical element; wire EDM (WEDM) uses a wire as an electrode, which makes it possible to perform cutting operations.

Two specific aspects of the EDM process make it particularly challenging for optimization. First, the process evolves with the machining position. In the DSEDM process, where the electrode sinks deeply into the material, the fragments spawned by erosion (contamination) are trapped, thus modifying the sparking conditions. In the WEDM process, the main factors that drive the evolution of the process are the machining operations. Second, the measurements are very noisy, owing to underlying, mainly random, physical phenomena; this is particularly true of spark triggering. The process evolution influences the single criterion to be minimized: total machining time. However, the latter is only known once the operations are completed. As a result, the whole history of manipulated variables influences the final criterion, which makes this a dynamic optimization problem.

A major contribution of this work is a proof that a first-order model of the DSEDM process, with the machining position as a state variable, makes it possible to transform the dynamic optimization problem into a static one. The tools used in this demonstration are Pontryagin's Minimum Principle and parametric programming. The conclusion is that, to achieve minimum machining time, maximum speed must be sought all along the trajectory.
The online search for maximum speed is another important contribution of this thesis. Noisy efficiency functions are known to be a significant challenge to the reliability of optimization algorithms. To address this issue, the Golden Ratio Search and Nelder-Mead simplex algorithms were chosen as the starting point, as they do not rely explicitly on the gradient of the efficiency function. The addition of a further dilation condition makes these algorithms more effective in stochastic mode; this condition is based on the detection of measurement samples that contradict the assumed unimodal shape of the efficiency function. As a result of this modification, the density of the final optimization points is well centred relative to the theoretical optimum, and their dispersion is small. Moreover, the size of the search region never approaches zero for either algorithm, so as the machining conditions evolve, the optimizer can target a new optimum. This adaptability proves to be a significant improvement over existing algorithms. For the DSEDM process, a simple model of the efficiency surface as a function of the control variables, calibrated on sampled measurements, allowed the static optimization to be validated. For the WEDM process, conclusive results for the modified algorithms were obtained both in simulation and in machine-based tests.
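The gradient-free bracketing idea behind the Golden Ratio Search mentioned above can be sketched as follows. This is the classic deterministic golden-section search on a noise-free unimodal function, not the thesis's stochastic variant with the dilation condition, and the test function is an arbitrary assumption.

```python
import math

# Classic golden-section search (illustrative sketch): minimize a
# unimodal function on [a, b] without gradients, shrinking the bracket
# by the inverse golden ratio at every step.

def golden_section(f, a, b, tol=1e-6):
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):      # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# unimodal test function with its minimum at x = 1.5
print(round(golden_section(lambda x: (x - 1.5) ** 2, 0.0, 4.0), 4))  # -> 1.5
```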