
Lecture

# Pitfalls in Empirical NLP Research

Description

This lecture examines common pitfalls in empirical NLP research, focusing on model evaluation and the importance of statistical testing. The instructor discusses the pitfalls of relying exclusively on the BLEU metric, the impact of hyperparameter tuning, and the role of statistical power in detecting true differences. Through the analysis of several papers, the lecture emphasizes the need for standardized metrics, reproducibility, and a community effort to improve research quality in the NLP field.
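The BLEU-only comparisons the lecture critiques are usually backed by a paired significance test over the test set. A minimal sketch of a paired bootstrap test between two systems (the sentence-level scores and function name below are illustrative, not code from the lecture):

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Paired bootstrap test: how often does system A beat system B
    when the test set is resampled with replacement?"""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        delta = sum(scores_a[i] - scores_b[i] for i in idx)
        if delta > 0:
            wins += 1
    # One-sided p-value: fraction of resamples where A does NOT beat B.
    return 1.0 - wins / n_resamples

# Toy sentence-level metric scores for two hypothetical systems.
a = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.73, 0.51]
b = [0.58, 0.54, 0.69, 0.50, 0.60, 0.57, 0.70, 0.49]
p = paired_bootstrap(a, b)
print(f"approximate one-sided p-value: {p:.3f}")
```

With real systems one would resample at the document or test-set level and use the actual evaluation metric; the pairing (resampling the same indices for both systems) is what makes the comparison sensitive to per-example differences.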

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

In course

EE-608: Deep Learning For Natural Language Processing

The Deep Learning for NLP course provides an overview of neural network based methods applied to text. The focus is on models particularly suited to the properties of human language, such as categori

Related concepts (12)

Statistical hypothesis testing

A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters. While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710), followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth.
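Arbuthnot's sex-ratio analysis is essentially a sign test. A minimal sketch using an exact two-sided binomial test, implemented from scratch for the p = 0.5 case (the counts below are illustrative, not Arbuthnot's data):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test p-value: sum the probabilities of
    all outcomes no more likely than the observed one under H0."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    return sum(q for q in pmf if q <= observed + 1e-12)

# Toy version of Arbuthnot's question: 60 male births out of 100.
p_value = binom_two_sided_p(60, 100)
print(f"two-sided p-value: {p_value:.4f}")
```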

Statistical significance

In statistical hypothesis testing, a result has statistical significance when a result at least as "extreme" would be very infrequent if the null hypothesis were true. More precisely, a study's defined significance level, denoted by α, is the probability of the study rejecting the null hypothesis, given that the null hypothesis is true; and the p-value of a result, p, is the probability of obtaining a result at least as extreme, given that the null hypothesis is true. The result is statistically significant, by the standards of the study, when p ≤ α.
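The definitions above can be sketched numerically. The following assumes a simple two-sided z-test with known population standard deviation (all numbers are illustrative):

```python
import math

def z_test_p_value(sample_mean, mu0, sigma, n):
    """Two-sided p-value for a z-test of H0: mean = mu0,
    with known population sd sigma and sample size n."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    # P(|Z| >= |z|) for a standard normal Z.
    return math.erfc(abs(z) / math.sqrt(2))

alpha = 0.05
p = z_test_p_value(sample_mean=103.0, mu0=100.0, sigma=10.0, n=50)
print(f"p = {p:.4f}; reject H0 at alpha={alpha}? {p <= alpha}")
```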

Test statistic

A test statistic is a statistic (a quantity derived from the sample) used in statistical hypothesis testing. A hypothesis test is typically specified in terms of a test statistic, considered as a numerical summary of a data-set that reduces the data to one value that can be used to perform the hypothesis test. In general, a test statistic is selected or defined in such a way as to quantify, within observed data, behaviours that would distinguish the null from the alternative hypothesis, where such an alternative is prescribed, or that would characterize the null hypothesis if there is no explicitly stated alternative hypothesis.
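For example, the one-sample t statistic reduces an entire sample to a single number that can drive the hypothesis test; a minimal sketch (the data are illustrative):

```python
import math

def t_statistic(sample, mu0):
    """One-sample t statistic: how far the sample mean sits from mu0,
    measured in estimated standard errors."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
    return (mean - mu0) / math.sqrt(var / n)

data = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.4, 5.1]
t = t_statistic(data, mu0=5.0)
print(f"t = {t:.3f}")
```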

Likelihood-ratio test

In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models, specifically one found by maximization over the entire parameter space and another found after imposing some constraint, based on the ratio of their likelihoods. If the constraint (i.e., the null hypothesis) is supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently whether its natural logarithm is significantly different from zero.
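A minimal worked sketch for a binomial model, using Wilks' theorem to compare twice the log-likelihood ratio to a chi-squared distribution with one degree of freedom (the counts are illustrative):

```python
import math

def lrt_binomial(k, n, p0=0.5):
    """Likelihood-ratio test for H0: p = p0 in a binomial model.
    Statistic: 2 * (max log-likelihood - constrained log-likelihood)."""
    p_hat = k / n  # unconstrained MLE (assumes 0 < k < n)

    def loglik(p):
        return k * math.log(p) + (n - k) * math.log(1 - p)

    stat = 2 * (loglik(p_hat) - loglik(p0))
    # Chi-squared survival function with 1 df: P(X >= stat) = erfc(sqrt(stat/2)).
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

stat, p = lrt_binomial(k=60, n=100)
print(f"LR statistic = {stat:.3f}, p-value = {p:.4f}")
```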

Power of a test

In statistics, the power of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis (H₀) when a specific alternative hypothesis (H₁) is true. It is commonly denoted by 1 − β, and represents the chances of a true positive detection conditional on the actual existence of an effect to detect. Statistical power ranges from 0 to 1, and as the power of a test increases, the probability of making a type II error by wrongly failing to reject the null hypothesis decreases.
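Power can be estimated by simulation: draw many datasets under the alternative and count how often the test rejects. A minimal sketch for a two-sided z-test (parameters are illustrative):

```python
import math
import random
from statistics import NormalDist

def simulated_power(effect, sigma, n, alpha=0.05, trials=4000, seed=0):
    """Monte Carlo power estimate: fraction of simulated experiments,
    drawn under the alternative (true mean shift = effect), in which
    a two-sided z-test rejects H0 at level alpha."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(trials):
        se = sigma / math.sqrt(n)
        sample_mean = rng.gauss(effect, se)  # mean under the alternative
        if abs(sample_mean / se) >= z_crit:
            rejections += 1
    return rejections / trials

power = simulated_power(effect=0.5, sigma=1.0, n=30)
print(f"estimated power: {power:.2f}")
```

This is the calculation behind the lecture's point about detecting true differences: with a small effect or a small test set, power is low, so a real improvement will often fail to reach significance.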

Related lectures (22)

Property Testing: Uniform Distribution (COM-406: Foundations of Data Science)

Covers property testing for uniform distributions and the importance of reliable predictions.

Neuronal Morphologies: Global Features and Measurements (MOOC: Simulation Neuroscience)

Covers neuronal morphologies, visualisation, measurements, and statistical tests.

Building a Decision Tree (MGT-448: Statistical inference and machine learning)

Covers building decision trees to classify mushrooms as poisonous or not.

Chi-Square Test: Independence Hypothesis (MATH-131: Probability and statistics)

Explains the Chi-Square test for independence hypothesis and its practical applications.

Headline Analysis: Language Impact & Success (CS-401: Applied data analysis)

Explores the influence of language on headline success through real-world data analysis and statistical testing.