This lecture covers Support Vector Machines, introduced by Cortes and Vapnik. It presents the training algorithm for optimal-margin classifiers and extends the results to non-separable training data, discussing the high generalization ability of support-vector networks and comparing their performance. The lecture derives the hard-SVM rule for finding the max-margin separating hyperplane and proves the equivalence of its formulations. It then introduces the soft-SVM rule as a relaxation for data that are not linearly separable, detailing the role of slack variables. It also examines losses for classification, including the quadratic, logistic, and hinge losses, and their behavioral differences, and concludes with optimization techniques for finding the optimal hyperplane based on convex duality and risk minimization.
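For reference, the formulations mentioned above can be written in standard notation (this notation is ours and may differ from the lecture's slides). The hard-SVM rule finds the max-margin separating hyperplane:
\[ \min_{w,\,b} \ \tfrac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1 \ \text{for all } i. \]
The soft-SVM rule relaxes the constraints with slack variables \(\xi_i\) for non-separable data:
\[ \min_{w,\,b,\,\xi} \ \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i, \ \ \xi_i \ge 0. \]
This is equivalent to regularized empirical risk minimization with the hinge loss \(\ell_{\text{hinge}}(z) = \max(0,\, 1 - z)\):
\[ \min_{w,\,b} \ \tfrac{\lambda}{2}\|w\|^2 + \frac{1}{n} \sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i\,(w^\top x_i + b)\bigr). \]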
This video is available exclusively on Mediaspace for a restricted audience. Please log in to Mediaspace to access it if you have the necessary permissions.