Lecture

Pitfalls in Empirical NLP Research

Description

This lecture examines common pitfalls in empirical NLP research, focusing on model evaluation and the importance of statistical significance testing. The instructor discusses the risks of relying exclusively on the BLEU metric, the impact of hyperparameter tuning on reported results, and the role of statistical power in detecting true differences between systems. Through the analysis of several published papers, the lecture argues for standardized metrics, reproducibility, and a community-wide effort to improve research quality in the NLP field.
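To make the statistical-testing theme concrete, below is a minimal sketch of a paired bootstrap resampling test in the style of Koehn (2004), a common way to check whether an observed metric difference between two systems exceeds test-set noise. This example is illustrative only and not taken from the lecture: the function name, the assumption that per-sentence scores are available for both systems, and the toy numbers are all ours.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Approximate p-value for "system A is better than system B".

    scores_a / scores_b: per-example metric scores for two systems on
    the same test set, in the same order. Resamples the test set with
    replacement and counts how often A outscores B; one minus that
    fraction is the approximate p-value.
    """
    assert len(scores_a) == len(scores_b), "scores must be paired"
    rng = random.Random(seed)
    n = len(scores_a)
    a_wins = 0
    for _ in range(n_resamples):
        # Draw a bootstrap test set: n indices sampled with replacement.
        idx = [rng.randrange(n) for _ in range(n)]
        delta = sum(scores_a[i] - scores_b[i] for i in idx)
        if delta > 0:
            a_wins += 1
    return 1.0 - a_wins / n_resamples

if __name__ == "__main__":
    # Toy per-sentence scores (made up for illustration).
    sys_a = [0.42, 0.55, 0.61, 0.38, 0.50, 0.47, 0.66, 0.53]
    sys_b = [0.40, 0.52, 0.63, 0.35, 0.48, 0.45, 0.60, 0.51]
    print(f"approx. p-value: {paired_bootstrap(sys_a, sys_b):.3f}")
```

A small p-value here suggests the score gap is unlikely to be an artifact of which test sentences were sampled; with few test examples, even sizable gaps can fail to reach significance, which is the statistical-power point the lecture stresses.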
