Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard. Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, finance, statistics (optimal experimental design), and structural optimization, where the approximation concept has proven to be efficient. With recent advancements in computing and optimization algorithms, convex programming is nearly as straightforward as linear programming.

A convex optimization problem is an optimization problem in which the objective function is a convex function and the feasible set is a convex set. A function $f$ mapping some subset of $\mathbb{R}^n$ into $\mathbb{R} \cup \{\pm\infty\}$ is convex if its domain is convex and for all $\theta \in [0, 1]$ and all $x, y$ in its domain, the following condition holds: $f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)$. A set $S$ is convex if for all members $x, y \in S$ and all $\theta \in [0, 1]$, we have that $\theta x + (1 - \theta) y \in S$.

Concretely, a convex optimization problem is the problem of finding some $x^\ast \in C$ attaining $\inf \{ f(x) : x \in C \}$, where the objective function $f : \mathcal{D} \subseteq \mathbb{R}^n \to \mathbb{R}$ is convex, as is the feasible set $C$. If such a point exists, it is referred to as an optimal point or solution; the set of all optimal points is called the optimal set. If $f$ is unbounded below over $C$ or the infimum is not attained, then the optimization problem is said to be unbounded. Otherwise, if $C$ is the empty set, then the problem is said to be infeasible.

A convex optimization problem is in standard form if it is written as

$$\begin{aligned} \underset{x}{\text{minimize}} \quad & f(x) \\ \text{subject to} \quad & g_i(x) \le 0, \quad i = 1, \dots, m \\ & h_i(x) = 0, \quad i = 1, \dots, p, \end{aligned}$$

where:
- $x \in \mathbb{R}^n$ is the optimization variable;
- the objective function $f : \mathcal{D} \subseteq \mathbb{R}^n \to \mathbb{R}$ is a convex function;
- the inequality constraint functions $g_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \dots, m$, are convex functions;
- the equality constraint functions $h_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \dots, p$, are affine transformations, that is, of the form $h_i(x) = a_i \cdot x - b_i$, where $a_i$ is a vector and $b_i$ is a scalar.

This notation describes the problem of finding $x \in \mathbb{R}^n$ that minimizes $f(x)$ among all $x$ satisfying $g_i(x) \le 0$, $i = 1, \dots, m$, and $h_i(x) = 0$, $i = 1, \dots, p$.
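To make the standard form concrete, here is a minimal sketch using the CVXPY modeling library in Python. The particular objective $f(x) = \|x\|^2$, the data $A$ and $b$, and the nonnegativity constraint are illustrative assumptions, not part of the definition above.

# Minimal CVXPY sketch of a standard-form convex problem.
# The objective and data below are illustrative, not from this page.
import cvxpy as cp
import numpy as np

n = 3
A = np.array([[1.0, 1.0, 1.0]])             # equality constraint data: h(x) = A x - b
b = np.array([1.0])

x = cp.Variable(n)                          # optimization variable x in R^n
objective = cp.Minimize(cp.sum_squares(x))  # f(x) = ||x||^2, a convex function
constraints = [
    A @ x == b,                             # affine equality constraint h(x) = 0
    x >= 0,                                 # convex inequalities g_i(x) = -x_i <= 0
]
prob = cp.Problem(objective, constraints)
prob.solve()

print(prob.status)  # 'optimal' when an optimal point is attained
print(x.value)      # an optimal point x*, here [1/3, 1/3, 1/3]
print(prob.value)   # the optimal value, here 1/3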

Related courses (33)
MATH-329: Continuous optimization
This course introduces students to continuous, nonlinear optimization. We study the theory of optimization with continuous variables (with full proofs), and we analyze and implement important algorithms.
ME-425: Model predictive control
Provide an introduction to the theory and practice of Model Predictive Control (MPC). Main benefits of MPC: flexible specification of time-domain objectives, performance optimization of highly complex
MGT-483: Optimal decision making
This course introduces the theory and applications of optimization. We develop tools and concepts of optimization and decision analysis that enable managers in manufacturing, service operations, marketing.
Related lectures (817)
Optimal Control: Unconstrained and Constrained Problems
Explores optimal control in unconstrained and constrained problems, emphasizing the importance of sparsity.
KKT and Convex Optimization
Covers the KKT conditions and convex optimization, discussing constraint qualifications and tangent cones of convex sets.
Constrained Convex Optimization: Min-Max Formulation and Fenchel Conjugation
Explores constrained convex optimization with min-max formulation and Fenchel conjugation.
Related publications (1,000)

Perturbed Utility Stochastic Traffic Assignment
This paper develops a fast algorithm for computing the equilibrium assignment with the perturbed utility route choice (PURC) model. Without compromise, this allows the significant advantages of the PURC model to be used in large-scale applications. We form ...
Informs, 2024

The effect of smooth parametrizations on nonconvex optimization landscapes
Nicolas Boumal
We develop new tools to study landscapes in nonconvex optimization. Given one optimization problem, we pair it with another by smoothly parametrizing the domain. This is either for practical purposes (e.g., to use smooth optimization algorithms with good g ...
Springer Heidelberg, 2024

BENIGN LANDSCAPES OF LOW-DIMENSIONAL RELAXATIONS FOR ORTHOGONAL SYNCHRONIZATION ON GENERAL GRAPHS
Nicolas Boumal
Orthogonal group synchronization is the problem of estimating n elements $Z_1, \dots, Z_n$ from the $r \times r$ orthogonal group given some relative measurements $R_{ij} \approx Z_i Z_j^{-1}$. The least-squares formulation is nonconvex. To avoid its local minim ...
Siam Publications, 2024
Related concepts (20)
Lagrange multiplier
In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equation constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables). It is named after the mathematician Joseph-Louis Lagrange. The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied.
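As a brief sketch of the mechanics (the objective $f(x, y) = x^2 + y^2$ and the constraint $x + y = 1$ are invented for illustration, not taken from this page), one forms the Lagrangian $L(x, y, \lambda) = f(x, y) - \lambda (x + y - 1)$ and applies the derivative test to it, for example symbolically with SymPy:

# Hypothetical example of the method of Lagrange multipliers with SymPy.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2          # objective (illustrative choice)
g = x + y - 1            # equation constraint g(x, y) = 0
L = f - lam * g          # the Lagrangian

# Derivative test of the now-unconstrained problem: set grad L to zero.
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
print(stationary)        # [{lam: 1, x: 1/2, y: 1/2}]: the constrained minimum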
Gradient descent
In mathematics, gradient descent (also often called steepest descent) is an iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent.
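A minimal sketch of that iteration (the quadratic objective and the fixed step size below are illustrative assumptions, not from this page):

# Plain gradient descent: repeated steps opposite the gradient.
import numpy as np

def gradient_descent(grad, x0, step=0.1, iters=100):
    """Starting from x0, repeatedly step against grad to seek a local minimum."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)   # move in the direction of steepest descent
    return x

# Illustrative objective f(x) = ||x - c||^2, whose gradient is 2 (x - c);
# its unique (local and global) minimum is at c.
c = np.array([1.0, -2.0])
x_min = gradient_descent(lambda x: 2 * (x - c), x0=[0.0, 0.0])
print(x_min)  # converges toward [1.0, -2.0]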
Mathematical optimization
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
Related MOOCs (13)
Introduction to optimization on smooth manifolds: first order methods
Learn to optimize on smooth, nonlinear spaces: Join us to build your foundations (starting at "what is a manifold?") and confidently implement your first algorithm (Riemannian gradient descent).
Digital Signal Processing [retired]
The course provides a comprehensive overview of digital signal processing theory, covering discrete time, Fourier analysis, filter design, sampling, interpolation and quantization; it also includes a
Digital Signal Processing I
Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP
