This lecture covers the convergence analysis of stochastic gradient algorithms for smooth risks under several operational modes: updates with constant and vanishing step-sizes, data sampling with and without replacement, and mini-batch gradient approximations. It examines the conditions imposed on the risk and loss functions and establishes convergence in the mean-square-error sense. The instructor then shows how the choice of step-size sequence determines the convergence rate, presenting theorems and examples that illustrate the rates obtained in each regime.
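As a rough companion to the modes listed above, the sketch below illustrates a stochastic-gradient recursion on a synthetic least-squares risk with the options the lecture enumerates: constant versus vanishing step-sizes, sampling with or without replacement, and mini-batch gradient approximations. This is not taken from the lecture; the function `sgd_quadratic_demo`, its parameters, and the decay rule mu(i) = mu0/(i+1) are illustrative assumptions only.

```python
import numpy as np

def sgd_quadratic_demo(X, y, mu0=0.1, epochs=50, batch=1,
                       replacement=True, vanishing=False, seed=0):
    """Hypothetical sketch: stochastic gradient descent on the
    least-squares risk (1/2N) * sum_n (y_n - x_n^T w)^2, supporting
    constant or vanishing step-sizes, with- or without-replacement
    sampling, and mini-batch gradient approximations."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    w = np.zeros(M)
    step = 0
    for _ in range(epochs):
        if replacement:
            # i.i.d. sampling with replacement
            order = rng.integers(0, N, size=N)
        else:
            # random reshuffling: each sample used once per epoch
            order = rng.permutation(N)
        for start in range(0, N, batch):
            idx = order[start:start + batch]
            Xb, yb = X[idx], y[idx]
            # mini-batch approximation of the true risk gradient
            grad = -(Xb.T @ (yb - Xb @ w)) / len(idx)
            # vanishing step-size mu(i) = mu0/(i+1), else constant mu0
            mu = mu0 / (step + 1) if vanishing else mu0
            w = w - mu * grad
            step += 1
    return w

if __name__ == "__main__":
    # synthetic data: y = X w_true + small noise
    rng = np.random.default_rng(1)
    w_true = np.array([1.0, -2.0, 0.5])
    X = rng.standard_normal((500, 3))
    y = X @ w_true + 0.1 * rng.standard_normal(500)
    w_const = sgd_quadratic_demo(X, y, mu0=0.05, batch=10)
    w_vanish = sgd_quadratic_demo(X, y, mu0=0.5, batch=10,
                                  replacement=False, vanishing=True)
    print("constant step-size estimate :", w_const)
    print("vanishing step-size estimate:", w_vanish)
```

Under a constant step-size the iterate hovers in a small neighborhood of the minimizer (steady-state error proportional to the step-size), whereas the vanishing sequence drives the mean-square error to zero at a rate governed by the decay schedule, which is the contrast the lecture's theorems quantify.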