Johnson-Lindenstrauss Theorem
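The Johnson-Lindenstrauss lemma states that any n points in a high-dimensional Euclidean space can be mapped into roughly O(log n / ε²) dimensions while distorting all pairwise distances by at most a factor of 1 ± ε, and that a suitably scaled random Gaussian projection achieves this with high probability. A minimal sketch of such a projection (illustrative code, not material from the lecture itself; the constant 4 in the target-dimension bound is one common choice):

```python
import numpy as np

def jl_project(X, eps=0.3, rng=None):
    """Embed the rows of X into k = ceil(4 ln(n) / eps^2) dimensions
    via a random Gaussian map scaled by 1/sqrt(k)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    k = int(np.ceil(4 * np.log(n) / eps**2))  # target dimension from the lemma's bound
    R = rng.normal(size=(d, k)) / np.sqrt(k)  # random projection matrix
    return X @ R

# 50 points in 1000 dimensions, projected down; pairwise distances
# are preserved up to roughly a (1 ± eps) factor with high probability.
X = np.random.default_rng(0).normal(size=(50, 1000))
Y = jl_project(X, eps=0.3, rng=0)
```

Note that the target dimension depends only on the number of points n and the tolerance ε, not on the original dimension d — this is what makes the lemma useful for very high-dimensional data.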
Related lectures (31)
Nonlinear Dimension Reduction Methods
Explores nonlinear dimension reduction methods that preserve neighborhood structure in lower-dimensional spaces.
Unsupervised Learning: Dimensionality Reduction
Explores unsupervised learning techniques for reducing dimensions in data, emphasizing PCA, LDA, and Kernel PCA.
Unsupervised learning: Young-Eckart-Mirsky theorem and intro to PCA
Introduces the Young-Eckart-Mirsky theorem and PCA for unsupervised learning and data visualization.
PCA: Key Concepts
Covers the key concepts of PCA, including reducing data dimensionality and extracting features, with practical exercises.
Gaussian Naive Bayes & K-NN
Covers Gaussian Naive Bayes, K-nearest neighbors, and hyperparameter tuning in machine learning.
Monte Carlo: Markov Chains
Covers unsupervised learning, dimensionality reduction, SVD, low-rank estimation, PCA, and Monte Carlo Markov Chains.
Maximum Entropy Modeling: Applications & Inference
Explores maximum entropy modeling applications in neuroscience and protein sequence data.
PCA: Key Concepts
Covers the key concepts of Principal Component Analysis (PCA) and its practical applications in data dimensionality reduction and feature extraction.
Unsupervised Learning: Dimensionality Reduction and Clustering
Covers unsupervised learning with a focus on dimensionality reduction and clustering, explaining how these methods find patterns in data without labels.
Support Vector Machines: Non-Linear Data Mapping
Explores mapping non-linearly separable data into higher-dimensional spaces with SVMs, covering polynomial feature expansion, regularization, the implications of noise, and curve-fitting methods.
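Several of the lectures above center on PCA and the low-rank approximation result (the Eckart-Young-Mirsky theorem): the best rank-k approximation of a data matrix in Frobenius norm is obtained by truncating its SVD. A minimal PCA-via-SVD sketch (illustrative code, not course material; the synthetic rank-5 data is an assumption for the example):

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                            # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are principal axes
    return Xc @ Vt[:k].T                               # scores in the k-dim subspace

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 20))  # rank-5 data in 20-D
Z = pca(X, 2)  # 2-D scores, suitable for visualization
```

The score columns are mutually orthogonal and ordered by explained variance, which is why truncating at k components yields the optimal rank-k reconstruction of the centered data.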