Lecture

Optimization: Gradient Descent

Description

This lecture covers optimization in machine learning, focusing on the gradient descent algorithm. It explains how to minimize a cost function by iteratively updating the parameters in the direction opposite the gradient. The lecture discusses why grid search becomes intractable as the number of parameters grows, which motivates gradient descent as a scalable alternative for finding good parameters. It also covers the theoretical motivation behind the algorithm, stochastic gradient descent, and the role of subgradients in non-smooth optimization.
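
The following sketch is not from the lecture itself; it is a minimal Python/NumPy illustration of the update rules described above, using an assumed least-squares cost with illustrative function names, data, and step sizes.

import numpy as np

def gradient_descent(grad, w0, lr=0.1, n_iters=200):
    """Full-batch gradient descent: repeatedly step in the
    direction opposite the gradient of the cost."""
    w = np.asarray(w0, dtype=float)
    for _ in range(n_iters):
        w = w - lr * grad(w)  # w <- w - lr * grad L(w)
    return w

def sgd(grad_i, w0, n, lr=0.05, n_iters=2000, seed=0):
    """Stochastic gradient descent: each step uses the gradient of a
    single randomly chosen example, an unbiased (but noisy)
    estimate of the full gradient."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(n_iters):
        i = rng.integers(n)
        w = w - lr * grad_i(w, i)
    return w

# Illustrative least-squares problem: L(w) = (1/2n) ||Xw - y||^2.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

full_grad = lambda w: X.T @ (X @ w - y) / len(y)      # (1/n) X^T (Xw - y)
example_grad = lambda w, i: X[i] * (X[i] @ w - y[i])  # gradient at example i

print(gradient_descent(full_grad, np.zeros(3)))  # close to w_true
print(sgd(example_grad, np.zeros(3), n=len(y)))  # close to w_true, noisier

For a non-smooth cost such as L(w) = |w|, the gradient is undefined at w = 0, but the same update can be run with any subgradient there (any value in [-1, 1]); this is the subgradient idea the lecture refers to.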

This video is available exclusively on MediaSpace for a restricted audience. If you have the necessary permissions, please log in to MediaSpace to access it.
