This lecture covers the transition from gradient descent (GD) to the conjugate gradient (CG) method for optimization on manifolds. Starting with the initialization and iterative steps of GD, it introduces CG as a more efficient optimization method. The lecture explains how GD iterates are linear combinations of past residuals, and how CG iterates are built from search directions that are H-orthogonal (conjugate) to one another. In finite-precision arithmetic CG loses this H-orthogonality and therefore does not terminate at the exact solution, yet it remains significantly better than GD: both converge linearly, but CG's rate is governed by the square root of the condition number rather than the condition number itself. The lecture concludes by highlighting how CG can nevertheless drive the residual down to machine precision in practice.
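As a concrete illustration (not drawn from the lecture itself), here is a minimal NumPy sketch of CG applied to the model quadratic f(x) = ½xᵀHx − bᵀx with symmetric positive-definite H, i.e. solving Hx = b; the function name, tolerance, and iteration cap are my own choices, not the lecture's.

```python
import numpy as np

def conjugate_gradient(H, b, x0=None, tol=1e-10, max_iter=None):
    """Minimize f(x) = 0.5 x^T H x - b^T x (equivalently, solve Hx = b)
    for symmetric positive-definite H via the conjugate gradient method."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - H @ x              # residual = -grad f(x)
    p = r.copy()               # first direction: steepest descent, as in GD
    rs_old = r @ r
    for _ in range(max_iter or n):
        Hp = H @ p
        alpha = rs_old / (p @ Hp)   # exact line search along p
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # residual near machine precision
            break
        beta = rs_new / rs_old      # keeps the new direction H-orthogonal
        p = r + beta * p            # to the previous search directions
        rs_old = rs_new
    return x

# Example: a small well-conditioned SPD system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A @ A.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(H, b)
print(np.linalg.norm(H @ x - b))  # residual norm, typically ~1e-11 or below
```

Note how each iterate lies in the span of the residuals seen so far, exactly as the summary describes for GD; CG's improvement comes entirely from the `beta` correction, which reuses the previous direction instead of restarting from the raw residual.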