Lecture

Support Vector Machines: Optimal Margin Classifiers

Description

This lecture covers support vector machines (SVMs), beginning with their invention by Corinna Cortes and Vladimir Vapnik. It presents the training algorithm for optimal margin classifiers and extends the results to non-separable training data, discussing the high generalization ability of support-vector networks and comparing their performance with that of other learning algorithms. The lecture derives the hard-SVM rule for finding the max-margin separating hyperplane and proves the equivalence of several formulations. It then introduces the soft-SVM rule, a relaxation that handles data that are not linearly separable by adding slack variables, and examines losses for classification, including the quadratic, logistic, and hinge losses and their behavioral differences. The lecture concludes with optimization techniques for finding the optimal hyperplane via convex duality and risk minimization.
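For reference, the standard formulations behind the terms above are sketched below, in conventional notation with training pairs (x_i, y_i) and labels y_i in {-1, +1}; this notation is the common textbook convention, not necessarily the lecture's own.

% Hard-SVM rule: the max-margin separating hyperplane for separable data
\min_{w,\,b} \ \tfrac{1}{2}\lVert w \rVert^2
\quad \text{s.t.} \quad y_i \left( w^\top x_i + b \right) \ge 1, \qquad i = 1, \dots, n.

% Soft-SVM rule: relax the constraints with slack variables \xi_i for data
% that are not linearly separable; C > 0 trades margin size for violations
\min_{w,\,b,\,\xi} \ \tfrac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad y_i \left( w^\top x_i + b \right) \ge 1 - \xi_i, \quad \xi_i \ge 0.

% Equivalently, regularized empirical risk minimization with the hinge loss
% \ell(y, \hat{y}) = \max\{0,\, 1 - y\hat{y}\}:
\min_{w,\,b} \ \tfrac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{n} \max\{0,\, 1 - y_i (w^\top x_i + b)\}.

% Convex (Lagrangian) dual of the soft-SVM problem
\max_{\alpha} \ \sum_{i=1}^{n} \alpha_i - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i^\top x_j
\quad \text{s.t.} \quad 0 \le \alpha_i \le C, \quad \sum_{i=1}^{n} \alpha_i y_i = 0.

The hinge-loss form exhibits the soft-SVM as an instance of regularized risk minimization, which is presumably the connection that the convex-duality and risk-minimization discussion at the end of the lecture draws on.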

This video is available exclusively on MediaSpace for a restricted audience. Please log in to MediaSpace to access it if you have the necessary permissions.

Watch on MediaSpace