This lecture covers Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) for dimensionality reduction. Starting from the intuition behind PCA, it explains how PCA aims to retain the important signal in the data while discarding noise. The lecture then works through the mathematics of PCA, showing that maximizing the variance of the projected data leads to an eigenvector problem for the covariance matrix. Moving on to LDA, it discusses the goal of keeping samples from the same class close together while separating the different classes. The lecture concludes with a comparison of Kernel PCA and traditional PCA, highlighting the benefits of Kernel PCA for nonlinearly structured data. Finally, it introduces t-distributed stochastic neighbor embedding (t-SNE), a popular technique for visualizing high-dimensional data.
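To make the variance-maximization view concrete: PCA seeks the unit vector w maximizing w^T Σ w, where Σ is the sample covariance matrix, and the maximizer is the leading eigenvector of Σ. Below is a minimal NumPy sketch of this idea (the data matrix `X` and the number of components `k` are assumptions chosen for the example, not from the lecture itself):

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    # Center the data: principal directions are defined on mean-centered features.
    Xc = X - X.mean(axis=0)
    # Sample covariance matrix of the features.
    cov = np.cov(Xc, rowvar=False)
    # Eigendecomposition; eigh is the right choice for a symmetric matrix.
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order, so reverse and keep k columns.
    top = eigvecs[:, ::-1][:, :k]
    # Projection onto the k directions of maximal variance.
    return Xc @ top

# Example: reduce 5-dimensional random data to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, k=2)
print(Z.shape)  # (100, 2)
```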
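For the LDA objective of keeping same-class samples close while pushing classes apart, one standard formulation is Fisher's criterion: maximize the between-class scatter relative to the within-class scatter of the projected data. A brief sketch using scikit-learn's LinearDiscriminantAnalysis follows; the two-blob toy dataset is an assumption made for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy two-class data: two Gaussian blobs (illustrative assumption).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 4)),
               rng.normal(3, 1, size=(50, 4))])
y = np.array([0] * 50 + [1] * 50)

# LDA finds projections maximizing between-class scatter
# relative to within-class scatter (Fisher's criterion).
# With two classes, at most one discriminant direction exists.
lda = LinearDiscriminantAnalysis(n_components=1)
Z = lda.fit_transform(X, y)
print(Z.shape)  # (100, 1)
```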
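Finally, a sketch of the Kernel PCA comparison and of t-SNE for visualization, using scikit-learn; the concentric-circles dataset and the RBF kernel with gamma=10 are assumptions chosen to make the nonlinear structure visible, not values from the lecture:

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA
from sklearn.manifold import TSNE

# Concentric circles: nonlinear structure that linear PCA cannot unfold.
X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

# Linear PCA: a rotation of the input, circles stay entangled.
Z_pca = PCA(n_components=2).fit_transform(X)

# Kernel PCA with an RBF kernel maps the data implicitly into a feature
# space where the two circles become (approximately) linearly separable.
Z_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

# t-SNE preserves local neighborhoods, making it popular for 2-D visualization.
Z_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
```

Plotting `Z_pca`, `Z_kpca`, and `Z_tsne` side by side shows the contrast discussed above: the kernel and neighbor-embedding methods separate the rings, while linear PCA does not.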