
# Convex optimization

Summary

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, finance, statistics (optimal experimental design), and structural optimization, where the approximation concept has proven to be efficient.
With recent advancements in computing and optimization algorithms, convex programming is nearly as straightforward as linear programming.
Definition
A convex optimization problem is an optimization problem in which the objective function is a convex function and the feasible set is a convex set.
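A minimal sketch of such a problem, solvable by projected gradient descent: the objective below is a convex least-squares cost and the feasible set is the box [0, 1]^5, a convex set. The matrix `A` and vector `b` are illustrative random data, not from any specific application.

```python
import numpy as np

# minimize f(x) = ||A x - b||^2  subject to 0 <= x <= 1 (a convex set).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def grad(x):
    # Gradient of the convex objective ||Ax - b||^2.
    return 2.0 * A.T @ (A @ x - b)

def project(x):
    # Euclidean projection onto the convex box [0, 1]^5.
    return np.clip(x, 0.0, 1.0)

x = np.zeros(5)
step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # 1/L for the L-smooth objective
for _ in range(500):
    x = project(x - step * grad(x))
# x now approximates the constrained minimizer.
```

Because both the objective and the feasible set are convex, each projected gradient step with step size 1/L is guaranteed not to increase the objective, and the iterates converge to a global (not merely local) constrained minimizer.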




Related concepts (25)

Mathematical optimization

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives.

Linear programming

Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships.
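A toy illustration of this idea, solved by brute-force vertex enumeration: a linear objective over a bounded polytope attains its best value at a vertex, so checking every feasible intersection of constraint pairs finds the optimum. The problem data below are made up for illustration.

```python
import itertools
import numpy as np

# Maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# Constraints written as A @ [x, y] <= b.
A = np.array([[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 2.0, 0.0, 0.0])
c = np.array([3.0, 2.0])  # objective coefficients

best_val, best_pt = -np.inf, None
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue  # parallel constraints: no unique intersection point
    v = np.linalg.solve(M, b[[i, j]])     # candidate vertex
    if np.all(A @ v <= b + 1e-9):         # keep it only if feasible
        val = c @ v
        if val > best_val:
            best_val, best_pt = val, v

# Optimum at the vertex x = 2, y = 2 with objective value 10.
```

Enumerating vertices is exponential in general; practical LP solvers (simplex, interior-point) exploit the same geometric fact far more efficiently.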

Nonlinear programming

In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints or the objective function are nonlinear.


Related courses (45)

ME-524: Advanced control systems

This course covers some theoretical and practical aspects of robust and adaptive control. This includes H-2 and H-infinity control in model-based and data-driven framework by convex optimization, direct, indirect and switching adaptive control. The methods are implemented in a hands-on lab.

MATH-329: Continuous optimization

This course introduces students to continuous, nonlinear optimization. We study the theory of optimization with continuous variables (with full proofs), and we analyze and implement important algorithms to solve constrained and unconstrained problems.

EE-556: Mathematics of data: from theory to computation

This course provides an overview of key advances in continuous optimization and statistical analysis for machine learning. We review recent learning formulations and models as well as their guarantees, describe scalable solution techniques and algorithms, and illustrate the trade-offs involved.

Related publications

Nicolas Boumal, Christopher Arnold Criscitiello

We describe the first gradient methods on Riemannian manifolds to achieve accelerated rates in the non-convex case. Under Lipschitz assumptions on the Riemannian gradient and Hessian of the cost function, these methods find approximate first-order critical points faster than regular gradient descent. A randomized version also finds approximate second-order critical points. Both the algorithms and their analyses build extensively on existing work in the Euclidean case. The basic operation consists in running the Euclidean accelerated gradient descent method (appropriately safe-guarded against non-convexity) in the current tangent space, then moving back to the manifold and repeating. This requires lifting the cost function from the manifold to the tangent space, which can be done for example through the Riemannian exponential map. For this approach to succeed, the lifted cost function (called the pullback) must retain certain Lipschitz properties. As a contribution of independent interest, we prove precise claims to that effect, with explicit constants. Those claims are affected by the Riemannian curvature of the manifold, which in turn affects the worst-case complexity bounds for our optimization algorithms.
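The Euclidean building block the abstract refers to can be sketched as follows: a Nesterov-style accelerated gradient method on a smooth convex quadratic. This is only the classical Euclidean routine, not the authors' manifold algorithm; the matrix `H` and vector `g` are illustrative random data.

```python
import numpy as np

# Accelerated gradient descent on f(x) = 0.5 * x @ H @ x - g @ x,
# with H symmetric positive definite (so f is smooth and convex).
rng = np.random.default_rng(1)
Q = rng.standard_normal((10, 10))
H = Q.T @ Q + np.eye(10)
g = rng.standard_normal(10)

def grad(x):
    return H @ x - g

L = np.linalg.eigvalsh(H)[-1]  # Lipschitz constant of the gradient
x = np.zeros(10)
y = x.copy()
t = 1.0
for _ in range(500):
    x_next = y - grad(y) / L                           # gradient step at extrapolated point
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # momentum schedule
    y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation (momentum)
    x, t = x_next, t_next

x_star = np.linalg.solve(H, g)  # exact minimizer, for comparison
```

On convex problems this scheme achieves the accelerated O(1/k^2) rate in function value; the paper's contribution is making such a step sound on a Riemannian manifold, where curvature perturbs the Lipschitz properties of the pulled-back cost.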

In the first part, we introduce steganography (chapter 1), not in the usual context of information security, but as a method to piggyback data on top of some content. We then focus on audio steganography, propose a new steganographic scheme (chapter 2), and model the noisy, analog communication channel we consider (section 3.2). The method is based on signal mixing, the science of constructing vectors in such a way that their sum can later be split back into the original components (chapter 4). Data recovery, or demixing, relies on convex optimization (chapter 5) and further signal processing detailed in chapter 6.

In the second part, we present our proof-of-concept implementation and show the results of simulation runs made to study the properties of the overall system (chapter 7). We study how the different parameters of the system and the communication channel affect the performance of the steganographic scheme. Finally, we draw conclusions about the findings (chapter 8) and suggest next steps for further study of this steganographic scheme (chapter 9).
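The mixing/demixing idea described in the abstract can be illustrated with a toy model: two components drawn from known subspaces are summed, and the sum is split back into its components by solving a convex least-squares problem. The dictionaries `B1` and `B2` below are random stand-ins, not the thesis's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 64, 8
B1 = rng.standard_normal((n, k))   # basis for the first component's subspace
B2 = rng.standard_normal((n, k))   # basis for the second component's subspace
a1, a2 = rng.standard_normal(k), rng.standard_normal(k)
mixture = B1 @ a1 + B2 @ a2        # the transmitted sum of both components

# Demixing: minimize ||[B1 B2] c - mixture||^2, a convex problem.
B = np.hstack([B1, B2])
c, *_ = np.linalg.lstsq(B, mixture, rcond=None)
x1_hat, x2_hat = B1 @ c[:k], B2 @ c[k:]  # recovered components
```

With random subspaces of low enough dimension, the stacked dictionary has full column rank almost surely, so the convex program recovers both components exactly; noise and structured dictionaries, as in the thesis, make the problem harder.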

2014