
# Stochastic programming

Summary

In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. A stochastic program is an optimization problem in which some or all problem parameters are uncertain but follow known probability distributions. This framework contrasts with deterministic optimization, in which all problem parameters are assumed to be known exactly. The goal of stochastic programming is to find a decision that both optimizes some criterion chosen by the decision maker and appropriately accounts for the uncertainty of the problem parameters. Because many real-world decisions involve uncertainty, stochastic programming has found applications in areas ranging from finance to transportation to energy optimization.
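A minimal sketch of this contrast, using sample average approximation over scenarios drawn from a known distribution (the quadratic cost and the Normal parameters below are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: choose x to minimize E[(x - xi)^2], where the
# uncertain parameter xi ~ Normal(mu=2, sigma=1) is known only through
# its distribution. A deterministic counterpart would fix xi at one
# nominal value; the stochastic program averages over scenarios.
scenarios = rng.normal(loc=2.0, scale=1.0, size=10_000)

def expected_cost(x, xi):
    """Sample-average estimate of E[(x - xi)^2]."""
    return np.mean((x - xi) ** 2)

# Grid search over candidate decisions (the SAA of the true problem).
candidates = np.linspace(-5, 5, 1001)
costs = [expected_cost(x, scenarios) for x in candidates]
best = candidates[int(np.argmin(costs))]
# The minimizer of E[(x - xi)^2] is E[xi] = 2, so `best` lands near 2.
```

Here the decision is evaluated against the whole scenario set rather than a single forecast, which is the essential difference from the deterministic formulation.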
Two-stage problems
The basic idea of two-stage stochastic programming is that (optimal) decisions should be based on data available at the time the decisions are made and cannot depend on future observations. The first-stage decision is taken before the uncertainty is realized; the second-stage (recourse) decision is taken afterwards and may adapt to the observed outcome.
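As a minimal illustration of this two-stage structure, consider the classic newsvendor problem solved by sample average approximation; the cost, price, and demand distribution below are hypothetical, chosen only for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-stage newsvendor: order x units now (first stage,
# before demand is known); once demand d is revealed, sell min(x, d)
# at `price` (the second-stage recourse is trivial here).
cost, price = 1.0, 3.0
demand = rng.poisson(lam=50, size=20_000)   # demand scenarios

def expected_profit(x, demand):
    # Second-stage value for each scenario, averaged (SAA).
    return np.mean(price * np.minimum(x, demand) - cost * x)

orders = np.arange(0, 101)
profits = [expected_profit(x, demand) for x in orders]
x_star = orders[int(np.argmax(profits))]
# Theory: the optimal order is the (1 - cost/price)-quantile of demand,
# here the 2/3-quantile of Poisson(50), in the low 50s.
```

In richer two-stage models the recourse is itself an optimization problem (e.g., an LP solved per scenario), but the decision pattern is the same: commit first, then adapt.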



Related concepts (4)

Linear programming

Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships.
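As a toy illustration of such a model, here is a small LP solved with SciPy's `linprog` (assuming SciPy is available; the numbers are made up):

```python
from scipy.optimize import linprog

# Toy LP (hypothetical numbers): maximize x + 2y
# subject to x + y <= 4, 0 <= x <= 2, y >= 0.
# linprog minimizes, so we negate the objective.
res = linprog(
    c=[-1, -2],                  # minimize -(x + 2y)
    A_ub=[[1, 1]], b_ub=[4],     # x + y <= 4
    bounds=[(0, 2), (0, None)],  # variable bounds
    method="highs",
)
assert res.success
x, y = res.x
# All weight goes to y: (x, y) = (0, 4), objective value 8.
```

The objective and every constraint are linear in (x, y), which is what makes the model an LP and lets the solver find a vertex optimum.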

Mathematical optimization

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives.
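In its simplest finite form, this is just a search over the alternatives by the chosen criterion (the alternatives and criterion below are hypothetical):

```python
# Selecting a best element from a finite set of alternatives
# by a criterion: here, maximize value per unit cost.
alternatives = [(10, 3.0), (20, 2.5), (15, 4.0)]  # (value, cost) pairs

best = max(alternatives, key=lambda a: a[0] / a[1])
# (20, 2.5) wins with ratio 8.0.
```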

General Algebraic Modeling System

The General Algebraic Modeling System (GAMS) is a high-level modeling system for mathematical optimization. GAMS is designed for modeling and solving linear, nonlinear, and mixed-integer optimization problems.

Related courses (5)

MGT-483: Optimal decision making

This course introduces the theory and applications of optimization. We develop tools and concepts of optimization and decision analysis that enable managers in manufacturing, service operations, marketing, transportation and finance to transform data into insights for making better decisions.

EE-556: Mathematics of data: from theory to computation

This course provides an overview of key advances in continuous optimization and statistical analysis for machine learning. We review recent learning formulations and models as well as their guarantees, describe scalable solution techniques and algorithms, and illustrate the trade-offs involved.

CS-456: Artificial neural networks/reinforcement learning

Since 2010 approaches in deep learning have revolutionized fields as diverse as computer vision, machine learning, or artificial intelligence. This course gives a systematic introduction into influential models of deep artificial neural networks, with a focus on Reinforcement Learning.

Related publications (62)

Daniel Kuhn, Mengmeng Li, Tobias Sutter

We study a stochastic program where the probability distribution of the uncertain problem parameters is unknown and only indirectly observed via finitely many correlated samples generated by an unknown Markov chain with d states. We propose a data-driven distributionally robust optimization model to estimate the problem’s objective function and optimal solution. By leveraging results from large deviations theory, we derive statistical guarantees on the quality of these estimators. The underlying worst-case expectation problem is nonconvex and involves O(d^2) decision variables. Thus, it cannot be solved efficiently for large d. By exploiting the structure of this problem, we devise a customized Frank-Wolfe algorithm with convex direction-finding subproblems of size O(d). We prove that this algorithm finds a stationary point efficiently under mild conditions. The efficiency of the method is predicated on a dimensionality reduction enabled by a dual reformulation. Numerical experiments indicate that our approach has better computational and statistical properties than the state-of-the-art methods.

2021

This paper presents a real-world application of stochastic programming in the Swiss hydro-based system. The seasonal hydrological conditions may reduce market volume to the extent that security criteria (i.e., the criteria determining the reserve amount) can no longer be met. In such situations, the question arises to what extent the current pre-defined security criteria can be relaxed while respecting the current market properties. We provide a framework accounting for the trade-off between the expected procurement cost of reserves and the security criteria. Additionally, this paper provides a risk-averse formulation of the current clearing model. The proposed approach is simulated using data from the Swiss reserve market to illustrate technical and economic aspects.
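The abstract by Kuhn, Li, and Sutter above hinges on a Frank-Wolfe method whose direction-finding subproblems are cheap. The following is a generic Frank-Wolfe sketch on the probability simplex, not the paper's customized algorithm; the quadratic objective is a hypothetical stand-in:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=500):
    """Generic Frank-Wolfe on the probability simplex.

    Each direction-finding subproblem min_{s in simplex} <grad f(x), s>
    has a closed-form solution: the vertex e_i with i = argmin_i grad_i.
    That closed form is what keeps the per-iteration cost low.
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[int(np.argmin(g))] = 1.0        # vertex of the simplex
        gamma = 2.0 / (k + 2.0)           # standard step-size schedule
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Hypothetical smooth objective: f(x) = 0.5 * ||x - b||^2 with b in the
# simplex, so the optimum is x* = b and f(x*) = 0.
b = np.array([0.2, 0.3, 0.5])
grad = lambda x: x - b
x0 = np.array([1.0, 0.0, 0.0])
x = frank_wolfe_simplex(grad, x0)
# x ends up close to b after a few hundred iterations.
```

Iterates remain feasible without any projection step, which is the reason Frank-Wolfe variants are attractive when, as in the paper, the feasible set is structured but high-dimensional.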

Daniel Kuhn, Wolfram Wiesemann

Stochastic programming and distributionally robust optimization seek deterministic decisions that optimize a risk measure, possibly in view of the most adverse distribution in an ambiguity set. We investigate under which circumstances such deterministic decisions are strictly outperformed by random decisions which depend on a randomization device producing uniformly distributed samples that are independent of all uncertain factors affecting the decision problem. We find that in the absence of distributional ambiguity, deterministic decisions are optimal if both the risk measure and the feasible region are convex, or alternatively if the risk measure is mixture-quasiconcave. We show that several classes of risk measures, such as mean (semi-)deviation and mean (semi-)moment measures, fail to be mixture-quasiconcave and can therefore give rise to problems in which the decision maker benefits from randomization. Under distributional ambiguity, on the other hand, we show that for any ambiguity averse risk measure there always exists a decision problem (with a nonconvex—e.g., mixed-integer—feasible region) in which a randomized decision strictly dominates all deterministic decisions.

2019

Related lectures (11)