This lecture explores mathematical aspects of neural network approximation and learning, focusing on the experimental revolution in deep learning, the basic setup of supervised learning, the challenges of high-dimensional learning, reproducing kernel Hilbert spaces, variation-norm spaces, a dynamic CLT for shallow neural networks, and upper bounds for deep ReLU networks. The instructor discusses the mean-field limit, global convergence, the continuity equation, depth separation, lower bounds for piecewise oscillatory functions, and future prospects for approximation versus optimization in deep models.
This video is available exclusively on MediaSpace for a restricted audience. Please log in to MediaSpace to access it if you have the necessary permissions.