This lecture discusses the transition from linear to nonlinear machine learning models, focusing on the k-Nearest Neighbors (k-NN) algorithm and feature expansion techniques. The instructor begins with a recap of linear models, explaining their structure and the algorithms built on them, such as least squares, logistic regression, and support vector machines. The lecture then introduces the k-NN algorithm, which classifies a query point by the labels of the training samples closest to it. The instructor illustrates how k-NN can handle nonlinear data and discusses its properties, including sensitivity to outliers and the importance of the chosen distance function. The concept of feature expansion is also explored: transforming inputs into a higher-dimensional space can make data that is not linearly separable in its original space separable there. The lecture concludes with the difficulty of choosing the right polynomial degree for feature expansion and introduces kernel methods as a way around that difficulty.
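
To make the k-NN idea concrete, here is a minimal sketch (an illustration, not the instructor's code) of nearest-neighbor classification in Python with NumPy. The toy ring-shaped dataset, the Euclidean distance, and the choice k = 3 are all assumptions for the example.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify x_query by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every training sample
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training samples
    nearest = np.argsort(dists)[:k]
    # Majority vote over their labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy nonlinear data: class 1 inside the unit circle, class 0 outside --
# a boundary no linear model can represent in the original features.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(int)

print(knn_predict(X, y, np.array([0.1, 0.2])))  # likely 1 (inside the circle)
print(knn_predict(X, y, np.array([1.8, 1.5])))  # likely 0 (outside)
```

Note how the sketch exposes the two properties the lecture mentions: with k = 1 a single mislabeled neighbor decides the prediction (the sensitivity to outliers), and replacing `np.linalg.norm` with another metric changes which points count as "near" (the importance of the distance function).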
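
The feature-expansion point can be shown with a small sketch as well (again an illustration under assumed toy data, not the lecture's example). One-dimensional points labeled by whether |x| < 1 cannot be split by any single threshold on x, but after the degree-2 expansion x → (x, x²) the rule x² < 1 is a linear threshold on the second coordinate.

```python
import numpy as np

# 1-D data: class 1 when |x| < 1, class 0 otherwise.
# No single linear threshold on x separates the two classes.
x = np.linspace(-2, 2, 9)
y = (np.abs(x) < 1).astype(int)

# Degree-2 feature expansion: x -> (x, x^2).
# In the expanded space, "x^2 < 1" is a linear decision boundary
# (a threshold on the second feature).
Phi = np.column_stack([x, x**2])
y_hat = (Phi[:, 1] < 1).astype(int)

print(np.array_equal(y, y_hat))  # True: linearly separable after expansion
```

The catch, as the lecture notes, is picking the degree: too low and the expanded data still is not separable, too high and the number of features grows rapidly, which is the motivation for the kernel methods introduced at the end.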