This lecture introduces an alternative formulation of logistic regression for {-1, +1} labels, showing a simpler expression for the conditional probability of a label given the features. By reinterpreting logistic regression as empirical risk minimization with a new loss function, the instructor explains the connection to the maximum likelihood principle and the notion of empirical risk. The lecture compares this loss with the other loss functions encountered so far, emphasizing the convexity and advantages of the logistic loss. Through graphical representations and toy data examples, the instructor illustrates how minimizing the logistic loss can yield predictors with lower empirical risk under the zero-one loss.
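
Below is a minimal sketch, not taken from the lecture itself, of the idea described above: with labels in {-1, +1}, the per-example logistic loss is log(1 + exp(-y w^T x)), and minimizing its average by gradient descent tends to produce a predictor with low zero-one empirical risk as well. The toy Gaussian data, step size, iteration count, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed): two Gaussian blobs with labels in {-1, +1}.
n = 100
X = np.vstack([rng.normal(+1.0, 1.0, size=(n, 2)),
               rng.normal(-1.0, 1.0, size=(n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

def logistic_loss(w, X, y):
    # Average logistic loss: (1/N) * sum_i log(1 + exp(-y_i * w^T x_i)).
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

def logistic_grad(w, X, y):
    # Gradient of the average logistic loss with respect to w.
    margins = y * (X @ w)
    coeff = -y / (1.0 + np.exp(margins))  # derivative of log(1+exp(-m)) times dm/dw factor y
    return (X.T @ coeff) / len(y)

def zero_one_risk(w, X, y):
    # Empirical risk under the zero-one loss: fraction of misclassified points.
    return np.mean(np.sign(X @ w) != y)

# Plain gradient descent on the convex logistic loss (illustrative settings).
w = np.zeros(2)
for _ in range(500):
    w -= 0.5 * logistic_grad(w, X, y)

print("average logistic loss:", logistic_loss(w, X, y))
print("zero-one empirical risk:", zero_one_risk(w, X, y))
```

On this kind of separable toy data the zero-one empirical risk of the learned predictor is close to zero, even though the optimization only ever touches the smooth, convex logistic loss, which is the point the lecture illustrates graphically.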