Lecture

Support Vector Machines: Theory and Applications

Description

This lecture covers the theory and applications of Support Vector Machines (SVMs) in machine learning. It traces Vapnik's training algorithm for optimal margin classifiers and the idea of support-vector networks, then develops linear classifiers, the hard-SVM rule for the max-margin separating hyperplane, and soft SVMs for data that are not linearly separable. It also treats classification by risk minimization, convex relaxation of the classification risk, surrogate losses for classification, and convex duality applied to SVMs, including optimization techniques for finding the optimal hyperplane and the interpretation of the dual formulation.
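The soft-SVM objective mentioned above can be illustrated concretely. The sketch below (assuming NumPy; the data, hyperparameters, and variable names are illustrative, not from the lecture) trains a linear soft-margin SVM by full-batch subgradient descent on the regularized hinge loss, one common way to solve the primal problem.

```python
import numpy as np

# Minimal sketch: primal soft-margin SVM via subgradient descent on
#   min_{w,b}  (lam/2)||w||^2 + (1/n) * sum_i max(0, 1 - y_i (w.x_i + b))
# Toy data: two Gaussian blobs with labels in {-1, +1} (illustrative).
rng = np.random.default_rng(0)
n = 100
X = np.vstack([rng.normal(-2, 1, (n // 2, 2)), rng.normal(2, 1, (n // 2, 2))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

lam, lr, epochs = 0.01, 0.1, 200   # regularization, step size, passes
w, b = np.zeros(2), 0.0
for _ in range(epochs):
    margins = y * (X @ w + b)
    active = margins < 1           # points inside or violating the margin
    # Subgradient of the hinge term: only margin-violating points contribute.
    grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
    grad_b = -y[active].sum() / n
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean(np.sign(X @ w + b) == y)  # training accuracy of the learned hyperplane
```

Only points with margin below 1 (the "active" set) drive the update, which mirrors the dual view discussed in the lecture: the optimal hyperplane is determined by the support vectors alone.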

This video is available exclusively on MediaSpace for a restricted audience. Please log in to MediaSpace to access it if you have the necessary permissions.

Watch on MediaSpace