This lecture covers loss functions used to measure the quality of machine learning models, with a focus on regression and linear regression. The instructor explains how loss functions quantify how well a model fits the data, using examples such as the squared loss and the mean absolute error. The concept of convexity of loss functions is introduced, along with gradient descent as a method for minimizing them. Through a simple one-parameter model, the lecture illustrates how gradient descent iteratively updates the model parameter to reduce the loss. The effect of the step size is then discussed: depending on its value, gradient descent may converge, make slow progress, or diverge, highlighting the delicate balance involved in choosing an appropriate step size for efficient optimization.
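The behaviour described above can be sketched in a few lines of code. The following is a minimal illustration (not taken from the lecture) of gradient descent on a one-parameter linear model y = w · x with squared loss; the data, step sizes, and iteration count are illustrative assumptions chosen to show convergence, slow progress, and divergence.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated by the true parameter w* = 2

def loss(w):
    # Mean squared error of the one-parameter model y_hat = w * x.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w):
    # Derivative of the mean squared error with respect to w.
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

def gradient_descent(step_size, w=0.0, iters=50):
    # Iteratively move w against the gradient, scaled by the step size.
    for _ in range(iters):
        w -= step_size * grad(w)
    return w

# Tiny step: slow progress; moderate step: converges near w* = 2;
# too large a step: the updates overshoot and diverge.
for eta in (0.001, 0.05, 0.2):
    w = gradient_descent(eta)
    print(f"step={eta}: w={w:.4f}, loss={loss(w):.4f}")
```

Because the loss here is convex (a parabola in w), gradient descent with a suitable step size is guaranteed to approach the minimum; the divergent case arises when the step size exceeds the stability threshold of the update.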