
Lecture

Convexifying Nonconvex Problems: SVM and Dimensionality Reduction

Description

This lecture covers the concept of convexifying nonconvex problems, focusing on Support Vector Machines (SVM) and nonlinear dimensionality reduction. It explains the primal and dual formulations of the SVM, the kernel trick, and the use of Lagrange multipliers. Additionally, it delves into nonlinear dimensionality reduction techniques, such as constructing k-nearest-neighbor graphs and solving optimization problems to unfold high-dimensional data. The instructor also discusses why convex-cardinality problems are hard to solve exactly and introduces the ℓ₁-norm heuristic as an approximation method.
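As a minimal illustration of the kernel trick mentioned in the description, the following sketch (with hypothetical 2-D inputs and a homogeneous degree-2 polynomial kernel, chosen here for illustration) checks that evaluating the kernel directly matches an explicit inner product in the induced feature space:

```python
import numpy as np

def phi(v):
    # Explicit degree-2 feature map for 2-D inputs:
    # phi(v) = (v1^2, sqrt(2) * v1 * v2, v2^2)
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

k_implicit = (x @ y) ** 2      # kernel evaluation, never forms phi
k_explicit = phi(x) @ phi(y)   # inner product in the 3-D feature space

print(np.isclose(k_implicit, k_explicit))  # the two agree
```

The point of the trick is on the left-hand side: the SVM dual only needs kernel values, so the feature map never has to be computed (or even be finite-dimensional, as with the RBF kernel).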

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

In course

MGT-418: Convex optimization

This course introduces the theory and application of modern convex optimization from an engineering perspective.

Related concepts (42)

Optimization problem

In mathematics, computer science and economics, an optimization problem is the problem of finding the best solution from all feasible solutions. Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete. An optimization problem with discrete variables is known as a discrete optimization problem, in which an object such as an integer, permutation or graph must be found from a countable set; one with continuous variables is known as a continuous optimization problem.

Convex optimization

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

Combinatorial optimization

Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combinatorial optimization problems are the travelling salesman problem ("TSP"), the minimum spanning tree problem ("MST"), and the knapsack problem. In many such problems, including the ones just mentioned, exhaustive search is not tractable, so one must instead resort to specialized algorithms that quickly rule out large parts of the search space, or to approximation algorithms.
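To make the tractability point concrete, here is a sketch (toy data, chosen for illustration) that solves a 0/1 knapsack instance by exhaustive search over all 2ⁿ item subsets — fine for n = 4, hopeless for large n:

```python
from itertools import combinations

# Toy 0/1 knapsack instance (hypothetical data).
weights = [3, 4, 5, 2]
values = [4, 5, 6, 3]
capacity = 7

# Exhaustive search: try every subset of items (2^n of them).
best = 0
n = len(weights)
for r in range(n + 1):
    for subset in combinations(range(n), r):
        if sum(weights[i] for i in subset) <= capacity:
            best = max(best, sum(values[i] for i in subset))

print(best)  # best total value within the capacity
```

Doubling n squares the number of subsets, which is why specialized methods (branch and bound, dynamic programming, or convex relaxations as in this lecture) are needed at scale.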

Lagrange multiplier

In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables). It is named after the mathematician Joseph-Louis Lagrange. The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied.
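A minimal worked example (not from the lecture): minimize f(x, y) = x² + y² subject to g(x, y) = x + y − 1 = 0. Stationarity of the Lagrangian L = f − λg requires ∇f = λ∇g, i.e. 2x = λ and 2y = λ, so x = y; the constraint then forces x = y = 1/2. The sketch below verifies these conditions numerically:

```python
import numpy as np

# Candidate optimum from the multiplier conditions: x = y = 1/2, lam = 2x.
x = y = 0.5
lam = 2 * x

grad_f = np.array([2 * x, 2 * y])  # gradient of f(x, y) = x^2 + y^2
grad_g = np.array([1.0, 1.0])      # gradient of g(x, y) = x + y - 1

print(np.allclose(grad_f, lam * grad_g))  # gradients are parallel at the optimum
print(x + y == 1.0)                       # constraint holds exactly
```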

Duality (optimization)

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). By weak duality, the objective value of any feasible solution to the primal (minimization) problem is at least as large as the objective value of any feasible solution to the dual (maximization) problem.
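Weak duality can be checked numerically on a toy linear program (the data below are chosen purely for illustration): any feasible primal objective value bounds any feasible dual objective value from above:

```python
import numpy as np

# Primal:  min  c^T x   s.t.  A x >= b,     x >= 0
# Dual:    max  b^T y   s.t.  A^T y <= c,   y >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

x = np.array([2.0, 2.0])  # primal-feasible: A @ x = [6, 8] >= b
y = np.array([0.2, 0.2])  # dual-feasible:  A.T @ y = [0.8, 0.6] <= c

assert np.all(A @ x >= b) and np.all(x >= 0)
assert np.all(A.T @ y <= c) and np.all(y >= 0)

print(c @ x >= b @ y)  # weak duality: primal value 4.0 >= dual value 2.0
```

The same inequality underlies the SVM derivation in this lecture: the dual problem provides a lower bound on the primal, and under strong duality the two optimal values coincide.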

Related lectures (32)

Convexifying Nonconvex Problems (MGT-418: Convex optimization)

Explores convexifying nonconvex problems through kernel tricks, sensitivity interpretation, and nonlinear dimensionality reduction.

Optimization with Constraints: KKT Conditions (MATH-212: Analyse numérique et optimisation)

Covers the KKT conditions for optimization with constraints, essential for solving constrained optimization problems efficiently.

KKT and Convex Optimization (MATH-329: Continuous optimization)

Covers the KKT conditions and convex optimization, discussing constraint qualifications and tangent cones of convex sets.

Convex Optimization: Nonlinear Dimensionality Reduction (MGT-418: Convex optimization)

Explores convex optimization in nonlinear dimensionality reduction, showcasing practical applications in signal processing and regression tasks.

The Hidden Convex Optimization Landscape of Deep Neural Networks

Explores the hidden convex optimization landscape of deep neural networks, showcasing the transition from non-convex to convex models.