Why are there so many saddle points?: Loss landscape and optimization methods
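The question in the title is commonly answered with a random-matrix intuition: at a generic critical point of a high-dimensional loss, the Hessian's eigenvalues are a mix of positive and negative, so the chance that all of them are positive (a genuine local minimum) shrinks rapidly with dimension, leaving saddle points as the dominant kind of critical point. The sketch below is an illustration of that intuition, not material from the lecture; it empirically estimates how often a random symmetric matrix (a stand-in for a Hessian) is positive definite, and the GOE-style scaling and trial count are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_positive_definite(dim, trials=2000):
    """Estimate how often a random symmetric matrix (a stand-in for the
    Hessian at a critical point) has all eigenvalues positive, i.e. the
    critical point would be a local minimum rather than a saddle."""
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        h = (a + a.T) / np.sqrt(2 * dim)  # symmetric matrix with GOE-like scaling
        if np.all(np.linalg.eigvalsh(h) > 0):
            count += 1
    return count / trials

# The estimated fraction drops quickly as the dimension grows,
# which is the usual heuristic for why saddle points dominate.
for d in (1, 2, 3, 5, 8):
    print(d, fraction_positive_definite(d))
```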
Related lectures (30)
Learning with Deep Neural Networks
Explores the success and challenges of deep learning, including overfitting, generalization, and the impact on various domains.
Deep Learning: Convolutional Neural Networks
Introduces Convolutional Neural Networks, explaining their architecture, training process, and applications in semantic segmentation tasks.
Neural Networks: Two Layers Neural Network
Covers the basics of neural networks, focusing on the development from two-layer neural networks to deep neural networks.
Deep Learning: Theory and Applications
Explores the mathematics of deep learning, neural networks, and their applications in computer vision tasks, addressing challenges and the need for robustness.
Neural Networks: Logic and Applications
Explores the logic of neuronal function, the Perceptron model, deep learning applications, and levels of abstraction in neural models.
Structures in Non-Convex Optimization
Explores non-convex optimization in deep learning, covering critical points, SGD convergence, saddle points, and adaptive gradient methods.
Reinforcement Learning: Policy Gradient and Actor-Critic Methods
Provides an overview of reinforcement learning, focusing on policy gradient and actor-critic methods for deep artificial neural networks.
Machine Learning for Solving PDEs: Random Feature Method
Explores the Random Feature Method for solving PDEs using machine learning algorithms to approximate high-dimensional functions efficiently.
Curse of Dimensionality in Deep Learning
Delves into the challenges of deep learning, exploring dimensionality, performance, and overfitting phenomena in neural networks.
Deep Learning III
Delves into deep learning optimization, challenges, SGD variants, critical points, overparametrized networks, and adaptive methods.