This lecture explores solving linear systems iteratively: a simple subroutine is run repeatedly, with each pass producing a more accurate approximation to the solution. The instructor discusses theoretical and practical criteria for comparing solvers, emphasizing worst-case assumptions and measures of convergence. Solvers such as Richardson iteration, Chebyshev iteration, and the conjugate gradient method (CG) are compared on iteration count, convergence guarantees, and bit complexity. The lecture delves into the role of adaptivity and cost in the Preconditioned Conjugate Gradient (PCG) method, highlighting the nested nonlinear hierarchy of problems and the Stieltjes problem of moments. Additionally, the lecture touches on the interplay of infinite and finite dimensions, spectral information, and the impact of rounding errors in iterative solvers.
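
As a concrete illustration of this iterative paradigm (a minimal sketch, not taken from the lecture itself), the snippet below implements Richardson iteration, the simplest of the solvers named above, using NumPy. The function name `richardson`, the tolerance, and the small test system are illustrative assumptions.

```python
import numpy as np

def richardson(A, b, omega, tol=1e-10, max_iter=1000):
    """Richardson iteration: x_{k+1} = x_k + omega * (b - A @ x_k).

    For symmetric positive definite A, this converges when
    0 < omega < 2 / lambda_max(A).
    """
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        r = b - A @ x                  # residual of the current iterate
        if np.linalg.norm(r) < tol:
            return x, k                # residual small enough: stop
        x = x + omega * r              # fixed-length step along the residual
    return x, max_iter

# Illustrative 2x2 SPD system; the classical optimal fixed step size is
# omega = 2 / (lambda_min + lambda_max).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
lams = np.linalg.eigvalsh(A)           # eigenvalues in ascending order
x, iters = richardson(A, b, omega=2.0 / (lams[0] + lams[-1]))
print(f"solution {x} after {iters} iterations")
```

Chebyshev iteration and CG refine this scheme by choosing their steps from spectral bounds (Chebyshev) or adaptively from the iterates themselves (CG), which connects to the lecture's discussion of adaptivity and cost.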