Lecture

Primal-dual Optimization III: Lagrangian Gradient Methods

Description

This lecture covers primal-dual optimization methods, focusing on Lagrangian gradient techniques. It delves into the mathematics behind data optimization, including convex formulations, ε-accurate solutions, and various primal-dual methods. The instructor explains the quadratic penalty and Lagrangian formulations, augmented dual problems, and the linearized augmented Lagrangian method. Examples such as blind image deconvolution, basis pursuit, and neural networks are used to illustrate the concepts. The lecture concludes with discussions on convergence guarantees, the augmented Lagrangian CGM, and applications like k-means clustering and scalable semidefinite programming.
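To illustrate the augmented Lagrangian gradient approach mentioned in the description, here is a minimal NumPy sketch on a toy equality-constrained problem. The problem, penalty parameter, and step sizes are assumptions for illustration, not material taken from the lecture itself.

```python
import numpy as np

# Toy equality-constrained problem (an illustrative assumption, not an
# example from the lecture): minimize (1/2)||x||^2  subject to  a^T x = 1.
# The closed-form solution is x* = a / ||a||^2 with multiplier lam* = -1/||a||^2.
a = np.array([3.0, 4.0])

rho = 10.0    # penalty parameter (assumed value)
lam = 0.0     # Lagrange multiplier (dual variable)
x = np.zeros(2)
step = 0.005  # primal step size, small enough for the Hessian I + rho*a*a^T

for _ in range(50):                    # outer loop: dual (multiplier) updates
    for _ in range(200):               # inner loop: approximate primal minimization
        # Gradient of the augmented Lagrangian
        # L_rho(x, lam) = (1/2)||x||^2 + lam*(a@x - 1) + (rho/2)*(a@x - 1)**2
        grad_x = x + (lam + rho * (a @ x - 1.0)) * a
        x -= step * grad_x             # primal gradient descent step
    lam += rho * (a @ x - 1.0)         # dual gradient ascent step

print(x)    # approaches a / ||a||^2 = [0.12, 0.16]
print(lam)  # approaches -1/||a||^2 = -0.04
```

The dual ascent step uses the constraint residual as the gradient of the dual function; the penalty term ρ/2·(aᵀx − 1)² is what distinguishes the augmented Lagrangian from the plain Lagrangian formulation and stabilizes the primal iterations.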

Related lectures (45)
Optimality of Convergence Rates: Accelerated/Stochastic Gradient Descent
Covers the optimality of convergence rates in accelerated and stochastic gradient descent methods for non-convex optimization problems.
Optimization Methods: Convergence and Trade-offs
Covers optimization methods, convergence guarantees, trade-offs, and variance reduction techniques in numerical optimization.
Primal-dual Optimization: Fundamentals
Explores primal-dual optimization, minimax problems, and gradient descent-ascent methods for optimization algorithms.
Primal-dual Optimization: Extra-Gradient Method
Explores the Extra-Gradient method for Primal-dual optimization, covering nonconvex-concave problems, convergence rates, and practical performance.
Optimization Programs: Piecewise Linear Cost Functions
Covers the formulation of optimization programs for minimizing piecewise linear cost functions.