Concept: Nuisance parameter

Summary

In statistics, a nuisance parameter is any parameter that is unspecified but must be accounted for when testing hypotheses about the parameters of interest.
The classic example of a nuisance parameter comes from the normal distribution, a member of the location–scale family: the variance σ² is often unknown, while one wishes to test hypotheses about the mean. Another example is linear regression with unknown variance in the explanatory variable (the independent variable): that variance is a nuisance parameter that must be accounted for to derive an accurate interval estimate of the regression slope, calculate p-values, and test hypotheses about the slope's value; see regression dilution.
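As a concrete illustration of the normal-distribution case, the one-sample t statistic handles the unknown σ² by replacing it with the sample variance; under the null hypothesis its distribution (Student's t with n − 1 degrees of freedom) no longer depends on σ². A minimal sketch on synthetic data (all names and numbers here are illustrative):

```python
import numpy as np

def t_statistic(x, mu0=0.0):
    """One-sample t statistic for H0: mean = mu0, with sigma^2 unknown."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s2 = x.var(ddof=1)  # sample variance: the estimate of the nuisance sigma^2
    return (x.mean() - mu0) / np.sqrt(s2 / n)

# Illustrative data: 50 draws from N(0.5, 2^2); we test H0: mu = 0.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=2.0, size=50)
t = t_statistic(sample)
```

In practice one would compare |t| against a critical value of the t distribution with n − 1 degrees of freedom, or compute a p-value from that distribution.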
Nuisance parameters are often scale parameters, but not always; for example, in errors-in-variables models, the unknown true location of each observation is a nuisance parameter. A parameter may also cease to be a "nuisance" if it becomes the object of study, is estimated from data, or becomes known.
The general treatment of nuisance parameters can be broadly similar between frequentist and Bayesian approaches to theoretical statistics. It relies on an attempt to partition the likelihood function into components representing information about the parameters of interest and information about the other (nuisance) parameters. This can involve ideas about sufficient statistics and ancillary statistics. When this partition can be achieved it may be possible to complete a Bayesian analysis for the parameters of interest by determining their joint posterior distribution algebraically. The partition allows frequentist theory to develop general estimation approaches in the presence of nuisance parameters. If the partition cannot be achieved it may still be possible to make use of an approximate partition.
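One standard frequentist device of this kind is the profile likelihood: for each value of the parameter of interest, the nuisance parameters are maximized out of the likelihood. A sketch for the normal mean with σ² as the nuisance parameter (synthetic data; the closed-form inner maximum σ̂²(μ) = mean((x − μ)²) is standard):

```python
import numpy as np

def profile_loglik(mu, x):
    """Profile log-likelihood of mu for a normal sample: the nuisance sigma^2
    is maximized out analytically via sigma_hat^2(mu) = mean((x - mu)^2)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s2_hat = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2_hat) + 1.0)

# Illustrative data: 200 draws from N(3, 1.5^2).
rng = np.random.default_rng(1)
x = rng.normal(3.0, 1.5, size=200)

# Maximizing the profile likelihood over a grid recovers mu_hat ~ x.mean().
grid = np.linspace(2.0, 4.0, 401)
lp = np.array([profile_loglik(m, x) for m in grid])
mu_hat = grid[lp.argmax()]
```

The profile maximum coincides with the sample mean here because σ̂²(μ) is minimized at μ = x̄; in less tractable models the inner maximization is done numerically.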
In some special cases, it is possible to formulate methods that circumvent the presence of nuisance parameters.

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related publications

No results

Related people

No results

Related units

No results

Related concepts (10)

Bootstrapping (statistics)

Bootstrapping is any test or metric that uses random sampling with replacement (e.g. mimicking the sampling process), and falls under the broader class of resampling methods. Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods. Bootstrapping estimates the properties of an estimand (such as its variance) by measuring those properties when sampling from an approximating distribution.
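A minimal illustration of this resampling idea (data and function names are illustrative, not from a specific library API): estimating the standard error of the sample median by resampling with replacement:

```python
import numpy as np

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Bootstrap standard error: resample with replacement from the data
    and take the standard deviation of the statistic across resamples."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    reps = np.array([
        stat(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_boot)
    ])
    return reps.std(ddof=1)

# Illustrative data: 100 draws from a standard normal.
rng = np.random.default_rng(42)
sample = rng.normal(0.0, 1.0, size=100)
se_median = bootstrap_se(sample, np.median)  # roughly 1.2533 / sqrt(100) for normal data
```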

Linear regression

In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.
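For the simple (one-variable) case, the least-squares estimates have the familiar closed form b̂ = cov(x, y)/var(x) and â = ȳ − b̂x̄; a minimal sketch on toy data:

```python
import numpy as np

def fit_simple_ols(x, y):
    """Closed-form least-squares fit of y = a + b*x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)  # slope: cov(x, y) / var(x)
    a = y.mean() - b * x.mean()                     # intercept through the means
    return a, b

# Toy data lying exactly on y = 1 + 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
a, b = fit_simple_ols(x, y)  # a ~ 1.0, b ~ 2.0
```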

Related courses (1)

EE-607: Advanced Methods for Model Identification

This course introduces the principles of model identification for non-linear dynamic systems, and provides a set of possible solution methods that are thoroughly characterized in terms of modelling as

Related lectures (17)

Eliminating Nuisance Parameters: Lemmas in Statistical Inference

Explores the elimination of nuisance parameters in statistical models using Lemmas 14 and 15.

Genomic Data Analysis: Identifying Differentially Expressed Genes

Discusses methods for identifying differentially expressed genes in genomic data analysis.

Eliminating Nuisance Parameters: Statistical Inference

Covers the elimination of nuisance parameters in statistical inference using Lemmas 14 and 15.

Related MOOCs

No results