Lecture

Gradient Descent: Proximal Operator and Step-Size Strategies

Description

This lecture covers the proximal operator, gradient descent, and step-size strategies in the context of minimizing risk functions. It explains the derivation of the gradient-descent algorithm, the use of constant step-sizes, and the transition to iteration-dependent step-sizes. The lecture also analyzes convergence under different step-size conditions, such as constant and vanishing step-sizes, and their impact on the algorithm's convergence rate.
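The ideas in the description can be illustrated with a short sketch: a generic gradient-descent loop x_{k+1} = x_k - mu_k * grad(x_k) run with a constant step-size and with a vanishing step-size, plus the proximal operator of the L1 norm (soft-thresholding) as a concrete prox example. The least-squares risk, the 1/L and 1/(L*sqrt(k+1)) step-size choices, and all function names here are illustrative assumptions, not the lecture's exact material.

```python
import numpy as np

# Illustrative proximal operator: for g(x) = lam * ||x||_1 the prox is
# soft-thresholding (an assumed example, not necessarily the lecture's).
def prox_l1(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def gradient_descent(grad, x0, step, iters):
    """Run x_{k+1} = x_k - mu_k * grad(x_k); step(k) returns mu_k."""
    x = np.array(x0, dtype=float)
    for k in range(iters):
        x -= step(k) * grad(x)
    return x

# Example risk: least squares f(x) = 0.5 * ||A x - b||^2 (assumed choice)
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
x_star = np.linalg.lstsq(A, b, rcond=None)[0]

# Constant step-size mu_k = 1/L: fast (geometric) convergence here
x_const = gradient_descent(grad, np.zeros(5), lambda k: 1.0 / L, 500)

# Vanishing step-size mu_k = 1/(L*sqrt(k+1)): slower on this problem,
# but the kind of schedule used when gradients are noisy or non-smooth
x_vanish = gradient_descent(grad, np.zeros(5),
                            lambda k: 1.0 / (L * np.sqrt(k + 1)), 500)

print(np.linalg.norm(x_const - x_star))   # constant step: near zero
print(np.linalg.norm(x_vanish - x_star))  # vanishing step: larger residual
```

With the constant step-size below 2/L, the iterates converge to the least-squares solution; the vanishing schedule still makes progress (its step-sizes are not summable) but at a slower rate, matching the trade-off the lecture's convergence analysis describes.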
