This lecture covers the k-Nearest Neighbors (k-NN) method, which classifies a data point according to the labels of its most similar training samples, and introduces feature expansion, which handles nonlinear data by transforming the inputs. It explains the properties, advantages, and drawbacks of k-NN, including the curse of dimensionality and the need for a good data representation. The lecture also discusses polynomial curve fitting, gradient computation, and training on expanded features. It concludes with a demonstration of polynomial feature expansion using different basis functions and its impact on classification accuracy.
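To make the two ideas concrete, here is a minimal NumPy sketch of both techniques mentioned in the summary. It is an illustrative assumption about the lecture's content, not the lecture's own code: `knn_predict` classifies test points by majority vote among the `k` nearest training points (Euclidean distance), and `poly_expand` maps a scalar input to the polynomial features `[1, x, x², …, x^degree]`, which are then used to fit a curve by ordinary least squares.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest
    training points (Euclidean distance). Hypothetical minimal sketch."""
    # Pairwise distances: shape (n_test, n_train)
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    # Indices of the k closest training samples for each test point
    nearest = np.argsort(dists, axis=1)[:, :k]
    # Majority vote over the neighbors' labels
    return np.array([np.bincount(y_train[idx]).argmax() for idx in nearest])

def poly_expand(x, degree):
    """Map scalar inputs x to the expanded features [1, x, x^2, ..., x^degree]."""
    return np.vander(x, degree + 1, increasing=True)

# Polynomial curve fitting on expanded features: fit a cubic to noisy sin(x)
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)
Phi = poly_expand(x, 3)                       # design matrix, shape (50, 4)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares weights
```

The same expansion can feed any linear model: the model stays linear in the weights `w`, while the expanded features let it represent a nonlinear function of the original input.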