This lecture covers the proximal operator, gradient descent, and step-size strategies in the context of minimizing risk functions. It explains the derivation of the gradient-descent algorithm, the use of a constant step-size, and the transition to iteration-dependent step-sizes. The lecture also analyzes convergence under different step-size conditions, such as constant and vanishing step-sizes, and their impact on the algorithm's convergence rate.
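To make the two step-size regimes concrete, here is a minimal sketch of the gradient-descent recursion w_{k+1} = w_k - mu_k * grad(w_k). The quadratic risk, the initial point, and the specific step-size values below are illustrative assumptions for this sketch, not taken from the lecture.

```python
import numpy as np

def gradient_descent(grad, w0, step, num_iters):
    """Run w_{k+1} = w_k - mu_k * grad(w_k).

    `step` maps the iteration index k to the step-size mu_k,
    which covers both the constant and the vanishing regimes.
    """
    w = np.asarray(w0, dtype=float)
    for k in range(num_iters):
        w = w - step(k) * grad(w)
    return w

# Illustrative risk: J(w) = 0.5 * ||w||^2, whose gradient is w.
grad = lambda w: w
w0 = np.array([5.0, -3.0])

# Constant step-size: mu_k = 0.1 for all k.
w_const = gradient_descent(grad, w0, step=lambda k: 0.1, num_iters=100)

# Vanishing (iteration-dependent) step-size: mu_k = 0.5 / (k + 1).
w_vanish = gradient_descent(grad, w0, step=lambda k: 0.5 / (k + 1), num_iters=100)

print("constant step-size:", w_const)
print("vanishing step-size:", w_vanish)
```

Passing the step-size as a function of k keeps one recursion for both cases: a constant step gives fast linear convergence on this strongly convex quadratic, while the vanishing step mu_k = O(1/k) trades speed for the decaying-step condition used in the convergence analysis.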