Lecture

Computing the Newton Step: From GD to CG

Description

This lecture covers the transition from Gradient Descent (GD) to Conjugate Gradients (CG) for computing the Newton step in optimization on manifolds. Starting with the initialization and iterative steps of GD, it then introduces CG as a more efficient method. The lecture explains how the GD iterates are linear combinations of the residuals, and how the CG iterates are computed. In floating-point arithmetic CG loses the H-orthogonality of its search directions, so it does not terminate with the exact solution in finitely many steps as the exact-arithmetic theory predicts; it nevertheless remains significantly better than GD, converging linearly at a rate governed by the square root of the condition number of H rather than the condition number itself. The lecture concludes by highlighting that, in practice, CG can compute the Newton step to machine precision.
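To make the idea concrete, below is a minimal sketch of linear CG applied to the Newton system H p = -g. This is not the lecture's own code: the function name newton_step_cg, the tolerance, and the test problem are illustrative assumptions, and the sketch assumes H is symmetric positive definite.

```python
import numpy as np

def newton_step_cg(H, g, tol=1e-10, max_iter=None):
    """Approximately solve H p = -g by conjugate gradients.

    Assumes H is symmetric positive definite. Returns the
    (approximate) Newton step p.
    """
    n = g.shape[0]
    max_iter = n if max_iter is None else max_iter
    p = np.zeros(n)             # current iterate
    r = -g - H @ p              # residual of H p = -g (here just -g)
    d = r.copy()                # first search direction = residual
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:   # stop once the residual is tiny
            break
        Hd = H @ d
        alpha = rs / (d @ Hd)   # exact line search along d
        p += alpha * d
        r -= alpha * Hd         # update residual incrementally
        rs_new = r @ r
        beta = rs_new / rs      # makes the new direction H-orthogonal to d
        d = r + beta * d
        rs = rs_new
    return p

# Example: a random symmetric positive definite system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A @ A.T + 50 * np.eye(50)   # SPD "Hessian"
g = rng.standard_normal(50)
p = newton_step_cg(H, g)
print(np.linalg.norm(H @ p + g))  # residual near machine precision
```

The printed residual illustrates the lecture's closing point: despite the loss of exact H-orthogonality in floating point, CG drives the Newton system's residual down to near machine precision.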
