Lecture

Primal-dual Optimization III: Lagrangian Gradient Methods

Description

This lecture covers primal-dual optimization methods, focusing on Lagrangian gradient techniques. It delves into the mathematics behind data optimization, including convex formulations, ε-accurate solutions, and various primal-dual methods. The instructor explains the quadratic penalty and Lagrangian formulations, augmented dual problems, and the linearized augmented Lagrangian method. Examples such as blind image deconvolution, basis pursuit, and neural networks illustrate the concepts. The lecture concludes with convergence guarantees, the augmented Lagrangian conditional gradient method (CGM), and applications such as k-means clustering and scalable semidefinite programming.
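The augmented Lagrangian method mentioned in the description alternates a primal minimization of L_ρ(x, λ) = f(x) + λᵀ(Ax − b) + (ρ/2)‖Ax − b‖² with a dual ascent step λ ← λ + ρ(Ax − b). Below is a minimal sketch on a toy equality-constrained quadratic; the problem data, ρ, and iteration count are illustrative choices, not taken from the lecture:

```python
import numpy as np

# Minimal augmented Lagrangian (method of multipliers) sketch for
#     min_x  0.5 * ||x - c||^2   subject to   A x = b.
# The data, rho, and iteration count are illustrative, not from the lecture.

def augmented_lagrangian(A, b, c, rho=5.0, iters=50):
    m, n = A.shape
    lam = np.zeros(m)                      # dual variable (multipliers)
    x = np.zeros(n)
    for _ in range(iters):
        # Primal step: for this quadratic, min_x L_rho(x, lam) reduces to
        # the linear solve  (I + rho * A^T A) x = c - A^T lam + rho * A^T b.
        x = np.linalg.solve(np.eye(n) + rho * A.T @ A,
                            c - A.T @ lam + rho * A.T @ b)
        # Dual step: gradient ascent on the constraint residual.
        lam += rho * (A @ x - b)
    return x, lam

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
c = rng.standard_normal(6)
x, lam = augmented_lagrangian(A, b, c)
print(np.linalg.norm(A @ x - b))           # feasibility residual shrinks toward 0
```

For general (non-quadratic) objectives the primal step has no closed form; the linearized augmented Lagrangian method covered in the lecture replaces it with a single gradient-based step on L_ρ.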

Related lectures (45)
Optimality of Convergence Rates: Accelerated/Stochastic Gradient Descent
Covers the optimality of convergence rates in accelerated and stochastic gradient descent methods for non-convex optimization problems.
Optimization Methods: Convergence and Trade-offs
Covers optimization methods, convergence guarantees, trade-offs, and variance reduction techniques in numerical optimization.
Primal-dual Optimization: Fundamentals
Explores primal-dual optimization, minimax problems, and gradient descent-ascent methods for optimization algorithms.
Primal-dual Optimization: Extra-Gradient Method
Explores the Extra-Gradient method for Primal-dual optimization, covering nonconvex-concave problems, convergence rates, and practical performance.
Optimization Programs: Piecewise Linear Cost Functions
Covers the formulation of optimization programs for minimizing piecewise linear cost functions.
