Feature Learning: Stability and Curse of Dimensionality
Related lectures (31), page 2 of 4
Learning Sparse Features: Overfitting in Neural Networks
Discusses how learning sparse features can lead to overfitting in neural networks despite empirical evidence of generalization.
Gradient-Based Algorithms in High-Dimensional Learning
Provides insights into gradient-based algorithms, open questions in deep learning, and the challenges of non-convex optimization problems.
Nonlinear Supervised Learning
Explores the inductive bias of different nonlinear supervised learning methods and the challenges of hyperparameter tuning.
Machine Learning Fundamentals
Introduces the basics of machine learning, covering supervised classification, logistic regression, and maximizing the margin.
Introduction to Machine Learning
Introduces machine learning concepts, from basics to advanced neural networks.
Deep Learning: No Free Lunch Theorem and Inductive Bias
Covers the No Free Lunch Theorem and the role of inductive bias in deep learning and reinforcement learning.
Reinforcement Learning Concepts
Covers key concepts in reinforcement learning, neural networks, clustering, and unsupervised learning, emphasizing their applications and challenges.
Neural Networks Optimization
Explores neural networks optimization, including backpropagation, batch normalization, weight initialization, and hyperparameter search strategies.
Neural Networks: Training and Activation
Explores neural networks, activation functions, backpropagation, and PyTorch implementation.
Deep Splines: Unifying Framework for Deep Neural Networks
Introduces a functional framework for deep neural networks with adaptive piecewise-linear splines, focusing on biomedical image reconstruction and the challenges of deep splines.