Lecture

Understanding Machine Learning: Exactly Solvable Models

Description

This lecture by Lenka Zdeborová delves into the statistical mechanics of learning, exploring the theoretical mysteries behind neural networks. Topics include sample complexity, models at the interface of data science and physics, and the learning capabilities of neural networks. The lecture also discusses the teacher-student perceptron, the optimal storage capacity of networks, and the generalization error of neural networks. Through a series of exactly solvable models and algorithms, the lecture aims to shed light on the theory of deep learning and the computational challenge of avoiding spurious minima.
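The teacher-student perceptron mentioned above can be sketched in a few lines: a fixed "teacher" weight vector labels random inputs, and a "student" perceptron is trained on those labels; the generalization error is then the student's disagreement with the teacher on fresh data. This is a minimal illustrative sketch (Gaussian inputs, sign-teacher, classic perceptron update rule), not the lecture's exact model, and all parameter values are arbitrary choices.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sign(x):
    return 1 if x >= 0 else -1

random.seed(0)
d = 20           # input dimension (arbitrary choice)
n_train = 400    # number of training samples

# Fixed teacher vector that defines the "true" labels.
teacher = [random.gauss(0, 1) for _ in range(d)]

def sample(n):
    """Draw n Gaussian inputs and label them with the teacher."""
    xs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
    ys = [sign(dot(teacher, x)) for x in xs]
    return xs, ys

xs, ys = sample(n_train)

# Train the student with the classic perceptron rule:
# on a misclassified example, move the weights toward y * x.
student = [0.0] * d
for _ in range(50):  # epochs
    for x, y in zip(xs, ys):
        if sign(dot(student, x)) != y:
            student = [w + y * xi for w, xi in zip(student, x)]

# Generalization error = fraction of fresh inputs where
# student and teacher disagree.
test_x, test_y = sample(2000)
errors = sum(sign(dot(student, x)) != y for x, y in zip(test_x, test_y))
gen_error = errors / len(test_x)
print(f"generalization error: {gen_error:.3f}")
```

Because the data are perfectly realizable by the teacher, the student's generalization error shrinks as the number of training samples grows relative to the dimension, which is the kind of sample-complexity question the solvable models in the lecture make precise.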

About this result
This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.
Related lectures (185)
Kernel Methods: Neural Networks
Covers the fundamentals of neural networks, focusing on RBF kernels and SVM.
Document Analysis: Topic Modeling
Explores document analysis, topic modeling, and generative models for data generation in machine learning.
Neural Networks: Training and Optimization
Explores neural network training, optimization, and environmental considerations, with insights into PCA and K-means clustering.
Deep Learning Fundamentals
Introduces deep learning, from logistic regression to neural networks, emphasizing the need for handling non-linearly separable data.
Neural Networks: Training and Activation
Explores neural networks, activation functions, backpropagation, and PyTorch implementation.