This lecture covers the principles of iterative descent methods in optimization, focusing on gradient descent for smooth convex and non-convex problems. It explains the role of computation in learning machines, maximum-likelihood estimators, unconstrained minimization, approximate versus exact optimality, and the geometric interpretation of stationarity. The instructor discusses the challenges faced by iterative optimization algorithms, notions of convergence, local minima, and the transition from local to global optimality. The lecture also explores the effect of the step size on convergence speed and the impact of discontinuities in practical optimization problems.
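As a companion to the discussion of step sizes, the following is a minimal sketch (not taken from the lecture) of fixed-step gradient descent on a simple, hypothetical ill-conditioned quadratic; the function names, the matrix `A`, and the specific step sizes are illustrative assumptions chosen only to show how the choice of step size changes how quickly the iterates approach the stationary point.

```python
import numpy as np

def gradient_descent(grad, x0, step_size, num_iters=50):
    """Run fixed-step gradient descent and return the iterates."""
    x = x0.copy()
    iterates = [x.copy()]
    for _ in range(num_iters):
        x = x - step_size * grad(x)   # descent step: move against the gradient
        iterates.append(x.copy())
    return iterates

# Hypothetical quadratic f(x) = 0.5 * x^T A x, whose gradient is A @ x.
# Its unique stationary point (and global minimum) is x* = 0.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
x0 = np.array([5.0, 5.0])

# Try a small, a moderate, and a near-unstable step size.
for eta in (0.01, 0.1, 0.19):
    final = gradient_descent(grad, x0, step_size=eta)[-1]
    print(f"step size {eta:>5}: final iterate {final}, ||x|| = {np.linalg.norm(final):.2e}")
```

Running the sketch shows the trade-off alluded to in the lecture summary: a very small step size converges slowly, a moderate one converges quickly, and a step size near the stability limit produces oscillatory, slow progress along the steep direction.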