This lecture discusses data dredging in machine learning, where one repeatedly iterates between the training, validation, and test sets until a model scores well, and highlights the resulting risks of overfitting and poor generalization.
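A minimal sketch of the pitfall and its remedy, assuming scikit-learn and a synthetic dataset (the model, hyperparameter grid, and split sizes are illustrative, not the lecture's actual code): hyperparameters are selected on the validation set only, and the test set is touched exactly once for the final estimate, rather than being peeked at repeatedly.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

# Split once into train / validation / test (roughly 60% / 20% / 20%).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_C, best_val_score = None, -np.inf
for C in [0.01, 0.1, 1.0, 10.0, 100.0]:
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    val_score = model.score(X_val, y_val)  # select on the validation set only
    if val_score > best_val_score:
        best_C, best_val_score = C, val_score

# The test set is used a single time, to evaluate the chosen model.
final_model = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print(f"chosen C={best_C}, validation acc={best_val_score:.3f}, "
      f"test acc={final_model.score(X_test, y_test):.3f}")
```

Reusing the test score inside the selection loop would amount to data dredging: the reported accuracy would be optimistically biased and unlikely to generalize.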
The lecture also covers overfitting, regularization, and cross-validation, exploring polynomial curve fitting, feature expansion, kernel functions, and model selection.
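A short sketch of these ideas together, assuming scikit-learn and synthetic data (the degrees and ridge penalty are illustrative assumptions): polynomial feature expansion turns curve fitting into linear regression, a ridge term regularizes the expanded features, and cross-validation is used for model selection over the degree, where low degrees underfit and high degrees overfit.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=40).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + 0.2 * rng.standard_normal(40)

for degree in [1, 3, 9, 15]:
    # Feature expansion (powers of x) followed by a regularized linear fit.
    model = make_pipeline(PolynomialFeatures(degree), Ridge(alpha=1e-3))
    scores = cross_val_score(model, x, y, cv=5,
                             scoring="neg_mean_squared_error")
    print(f"degree {degree:2d}: cross-validated MSE = {-scores.mean():.4f}")
```

The same pipeline idea extends to kernel functions, which implicitly perform the feature expansion without constructing the polynomial features explicitly.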