
# Simplex algorithm

Summary

In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming.
The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones, and these become proper simplices with an additional constraint. The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.
George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. In 1946, a colleague challenged him to mechanize the planning process, in part to distract him from taking another job. Dantzig formulated the problem as a system of linear inequalities, inspired by the work of Wassily Leontief; at that time, however, he did not include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, so to find the "best" feasible solution one must apply military-specified "ground rules" that describe how goals can be achieved rather than specifying a goal itself. Dantzig's core insight was that most such ground rules can be translated into a linear objective function to be maximized. Development of the simplex method was evolutionary and took place over a period of about a year.
After Dantzig included an objective function in his formulation in mid-1947, the problem became mathematically more tractable. He realized that one of the unsolved problems he had mistaken for homework in his professor Jerzy Neyman's class (and actually later solved) was applicable to finding an algorithm for linear programs. This problem involved proving the existence of Lagrange multipliers for general linear programs over a continuum of variables, each bounded between zero and one and satisfying linear constraints expressed in the form of Lebesgue integrals.
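The pivoting idea described above, walking from vertex to vertex of the feasible polytope while the objective improves, can be sketched with a minimal tableau implementation. This is an illustrative sketch only (no Phase I, no anti-cycling rule such as Bland's rule), for problems of the form maximize c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so the slack variables form an obvious starting basis; the function name `simplex` and the worked example are ours, not from the page:

```python
import numpy as np

def simplex(c, A, b):
    """Maximize c @ x subject to A @ x <= b, x >= 0, assuming b >= 0.

    Minimal tableau sketch: no degeneracy handling (e.g. Bland's
    rule) and no Phase I, so it only illustrates the pivoting idea.
    """
    m, n = A.shape
    # Build the tableau [A | I | b] with the objective row at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)           # slack variables
    T[:m, -1] = b
    T[-1, :n] = -c                       # maximization: negate c in the row
    basis = list(range(n, n + m))        # slacks start in the basis

    while True:
        col = int(np.argmin(T[-1, :-1]))  # entering column (most negative)
        if T[-1, col] >= 0:
            break                         # optimal: no improving direction
        pos = T[:m, col] > 1e-12
        if not pos.any():
            raise ValueError("unbounded LP")
        ratios = np.full(m, np.inf)
        ratios[pos] = T[:m, -1][pos] / T[:m, col][pos]
        row = int(np.argmin(ratios))      # leaving row (minimum ratio test)
        T[row] /= T[row, col]             # pivot
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
        basis[row] = col

    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# maximize 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18
x, val = simplex(np.array([3.0, 5.0]),
                 np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                 np.array([4.0, 12.0, 18.0]))
# optimal vertex (2, 6) with objective value 36
```

Each pivot moves the current basic feasible solution to an adjacent vertex of the polytope; the algorithm stops when no objective-row coefficient is negative, i.e. no neighboring vertex improves the objective.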

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related courses (16)

Related publications (120)

Related people (27)


MATH-261: Discrete optimization

This course is an introduction to linear and discrete optimization.
Warning: This is a mathematics course! While much of the course will be algorithmic in nature, you will still need to be able to p ...

MGT-483: Optimal decision making

This course introduces the theory and applications of optimization. We develop tools and concepts of optimization and decision analysis that enable managers in manufacturing, service operations, marke ...

PHYS-442: Modeling and design of experiments

In the academic or industrial world, to optimize a system, it is necessary to establish strategies for the experimental approach. The DOE allows you to choose the best set of measurement points to min ...


Related concepts (14)

Related MOOCs (6)

Related lectures (81)

Related units (3)

Simon Nessim Henein, Florent Cosandier, Loïc Benoît Tissot-Daguette, Etienne Frédéric Gabriel Thalmann

Flexure pivots, which are widely used for precision mechanisms, generally have the drawback of presenting parasitic shifts accompanying their rotation. The known solutions for canceling these undesirable parasitic translations usually induce a loss in radi ...

2024

This paper studies the problem of online performance optimization of constrained closed-loop control systems, where both the objective and the constraints are unknown black-box functions affected by exogenous time-varying contextual disturbances. A primal- ...

Colin Neil Jones, Yuning Jiang, Yingzhao Lian

Nonlinear model predictive control (NMPC) has been widely adopted to manipulate bilinear systems with dynamics that include products of the inputs and the states. These systems are ubiquitous in chemical processes, mechanical systems, and quantum physics, ...

Interior-point method

Interior-point methods (also referred to as barrier methods or IPMs) are a certain class of algorithms that solve linear and nonlinear convex optimization problems. An interior point method was discovered by Soviet mathematician I. I. Dikin in 1967 and reinvented in the U.S. in the mid-1980s. In 1984, Narendra Karmarkar developed a method for linear programming called Karmarkar's algorithm, which runs in provably polynomial time and is also very efficient in practice.
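Where the simplex method walks along the boundary of the feasible polytope, interior-point methods approach the optimum through its interior. A minimal sketch of the classic log-barrier variant for an LP (minimize c·x subject to Ax ≤ b) is below; the function `barrier_lp`, its parameters, and the toy LP are our own illustrative assumptions, not a production solver, and the starting point must be strictly feasible:

```python
import numpy as np

def barrier_lp(c, A, b, x0, mu=10.0, tol=1e-5):
    """Minimize c @ x subject to A @ x <= b via the log-barrier method.

    Teaching sketch only (assumes a bounded LP and a strictly
    feasible x0 with A @ x0 < b).
    """
    x = np.asarray(x0, dtype=float)
    m = len(b)
    t = 1.0
    while m / t > tol:                         # m/t bounds the duality gap
        # Centering: Newton's method on f(x) = t*c@x - sum(log(b - A@x)).
        for _ in range(100):
            slack = b - A @ x
            d = 1.0 / slack
            g = t * c + A.T @ d                # gradient of the barrier fn
            H = A.T @ (d[:, None] ** 2 * A)    # Hessian of the barrier fn
            dx = np.linalg.solve(H, -g)        # Newton direction
            if -g @ dx / 2 < 1e-9:             # Newton decrement: centered
                break
            fx = t * c @ x - np.sum(np.log(slack))
            s = 1.0
            xn = x
            for _ in range(60):                # backtracking line search,
                xn = x + s * dx                # staying strictly feasible
                if np.all(A @ xn < b):
                    fn = t * c @ xn - np.sum(np.log(b - A @ xn))
                    if fn <= fx + 0.25 * s * (g @ dx):
                        break
                s *= 0.5
            x = xn
        t *= mu                                # tighten the barrier
    return x

# minimize -x - 2y  s.t.  x + y <= 1,  x >= 0,  y >= 0  (optimum (0, 1))
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
x = barrier_lp(np.array([-1.0, -2.0]), A, b, x0=np.array([0.25, 0.25]))
```

As the barrier parameter t grows, the minimizers of the barrier function trace the "central path" through the interior of the polytope toward the optimal vertex, which is the key geometric contrast with simplex pivoting.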

Karmarkar's algorithm

Karmarkar's algorithm is an algorithm introduced by Narendra Karmarkar in 1984 for solving linear programming problems. It was the first reasonably efficient algorithm that solves these problems in polynomial time. The ellipsoid method is also polynomial time but proved to be inefficient in practice. Denoting by n the number of variables and by L the number of bits of input to the algorithm, Karmarkar's algorithm requires O(n^3.5 L) operations on O(L)-digit numbers, as compared to O(n^6 L) such operations for the ellipsoid algorithm.

Integer programming

An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integer constraints) are linear. Integer programming is NP-complete. In particular, the special case of 0-1 integer linear programming, in which unknowns are binary, and only the restrictions must be satisfied, is one of Karp's 21 NP-complete problems.
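The hardness of 0-1 integer linear programming can be made concrete with the naive exact approach: enumerate all 2^n binary vectors. The sketch below (our own illustration; the function name and the tiny knapsack instance are assumptions) works only for very small n, which is precisely the point of the NP-completeness result:

```python
from itertools import product
import numpy as np

def solve_01_ilp(c, A, b):
    """Maximize c @ x subject to A @ x <= b with each x_i in {0, 1}.

    Exhaustive search over all 2^n binary vectors: exact, but
    exponential in n, illustrating why 0-1 ILP is hard in general.
    """
    best_x, best_val = None, -np.inf
    for bits in product((0, 1), repeat=len(c)):
        x = np.array(bits)
        if np.all(A @ x <= b):            # feasibility check
            val = c @ x
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# A knapsack as a 0-1 ILP: item values c, item weights A, capacity b.
c = np.array([6, 10, 12])
A = np.array([[1, 2, 3]])
b = np.array([5])
x, val = solve_01_ilp(c, A, b)
# picks items 2 and 3 (weight 2 + 3 = 5) for total value 22
```

Practical ILP solvers avoid full enumeration with branch-and-bound and cutting planes, repeatedly solving LP relaxations (e.g. with the simplex method) to prune the search.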

Introduction to linear optimization, duality and the simplex algorithm.


Introduction to network optimization and discrete optimization

Linear Programming: Weighted Bipartite Matching

Covers linear programming, weighted bipartite matching, and vertex cover problems in optimization.

Response Surfaces I: LoF and Plans

Explores lack of fit concept and experimental designs for second-degree functions.

Optimal Decision Making: Sensitivity Analysis

Covers sensitivity analysis in linear programming, focusing on optimal solutions and their sensitivities to changes.