In statistics, quasi-likelihood methods are used to estimate parameters in a statistical model when exact likelihood methods, such as maximum likelihood estimation, are computationally infeasible. Because the quasi-likelihood is not a true likelihood, quasi-likelihood estimators lose asymptotic efficiency compared with, for example, maximum likelihood estimators; under broadly applicable conditions, however, they are consistent and asymptotically normal, and the asymptotic covariance matrix can be obtained using the so-called sandwich estimator. Examples of quasi-likelihood methods include generalized estimating equations and pairwise likelihood approaches.

The term quasi-likelihood function was introduced by Robert Wedderburn in 1974 to describe a function that has similar properties to the log-likelihood function but is not the log-likelihood of any actual probability distribution. He proposed fitting certain quasi-likelihood models using a straightforward extension of the algorithms used to fit generalized linear models.

Quasi-likelihood estimation is one way of allowing for overdispersion, that is, greater variability in the data than would be expected from the statistical model used. It is most often applied to count data or grouped binary data, i.e. data that would otherwise be modelled using the Poisson or binomial distribution. Instead of specifying a full probability distribution for the data, only the relationship between the mean and the variance is specified, in the form of a variance function giving the variance as a function of the mean. Generally, this function includes a multiplicative factor known as the overdispersion parameter or scale parameter, which is estimated from the data. Most commonly, the variance function is of a form such that fixing the overdispersion parameter at unity recovers the variance-mean relationship of an actual probability distribution such as the binomial or Poisson.
(For formulae, see the binomial data example and count data example under generalized linear models.)
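The mean-variance specification above can be illustrated with a quasi-Poisson fit: a Poisson model is fitted by iteratively reweighted least squares (the same algorithm used for generalized linear models), and the overdispersion parameter is then estimated from the Pearson statistic. The following minimal sketch uses hypothetical toy data and assumes a log link with variance function Var(Y) = φμ:

```python
import math

# Hypothetical toy count data: x is a covariate, y are observed counts.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.0, 3.0, 2.0, 7.0, 8.0, 15.0]
n, p = len(y), 2  # observations; parameters (intercept + slope)

# Iteratively reweighted least squares for a Poisson-type quasi-likelihood
# model with log link: mean mu = exp(b0 + b1*x), variance phi * mu.
mu = y[:]  # standard starting values (all counts here are positive)
b0 = b1 = 0.0
for _ in range(25):
    eta = [math.log(m) for m in mu]
    z = [e + (yi - m) / m for e, yi, m in zip(eta, y, mu)]  # working response
    w = mu  # working weights for the log link
    # Solve the 2x2 weighted least-squares normal equations.
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swz = sum(wi * zi for wi, zi in zip(w, z))
    swxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
    det = sw * swxx - swx * swx
    b0_new = (swxx * swz - swx * swxz) / det
    b1_new = (sw * swxz - swx * swz) / det
    done = abs(b0_new - b0) + abs(b1_new - b1) < 1e-10
    b0, b1 = b0_new, b1_new
    mu = [math.exp(b0 + b1 * xi) for xi in x]
    if done:
        break

# Overdispersion (scale) parameter: Pearson chi-square over residual df.
# A value above 1 indicates overdispersion relative to the Poisson model.
phi = sum((yi - m) ** 2 / m for yi, m in zip(y, mu)) / (n - p)
print(f"b0 = {b0:.3f}, b1 = {b1:.3f}, dispersion phi = {phi:.3f}")
```

Standard errors from the ordinary Poisson fit are then multiplied by the square root of φ; the point estimates themselves are unchanged, which is why fitting reduces to the usual GLM algorithm.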

Related courses (3)
MATH-408: Regression methods
General graduate course on regression methods
MATH-562: Statistical inference
Inference from the particular to the general based on probability models is central to the statistical method. This course gives a graduate-level account of the main ideas of statistical inference.
CIVIL-606: Inference for large-scale time series with application to sensor fusion
Large-scale time series analysis is performed by a new statistical tool that is superior to other estimators of complex state-space models. The identified stochastic dependences can be used for sensor
Related lectures (31)
Inference: Model Checking
Covers iterative weighted least squares, generalized linear models, and model checking.
Modern Regression: Spring Barley Data
Covers inference, weighted least squares, spring barley data analysis, and smoothing techniques.
Robust Regression in Genomic Data Analysis
Explores robust regression in genomic data analysis, focusing on downweighting large residuals for improved estimation accuracy and quality assessment metrics like NUSE and RLE.
Related concepts (2)
Count data
In statistics, count data is a statistical data type describing countable quantities, data which can take only the counting numbers, non-negative integer values {0, 1, 2, 3, ...}, and where these integers arise from counting rather than ranking. The statistical treatment of count data is distinct from that of binary data, in which the observations can take only two values, usually represented by 0 and 1, and from ordinal data, which may also consist of integers but where the individual values fall on an arbitrary scale and only the relative ranking is important.
Generalized linear model
In statistics, a generalized linear model (GLM) is a flexible generalization of ordinary linear regression. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value. Generalized linear models were formulated by John Nelder and Robert Wedderburn as a way of unifying various other statistical models, including linear regression, logistic regression and Poisson regression.
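The link-function structure described here can be made concrete with a small sketch (hypothetical coefficients): the linear predictor is passed through an inverse link function to produce the mean response, for example the log link used in Poisson regression or the logit link used in logistic regression:

```python
import math

# A GLM relates the linear predictor eta = b0 + b1*x to the mean
# response mu through a link function g: g(mu) = eta, so mu = g^{-1}(eta).

def inv_log(eta):
    """Inverse of the log link (Poisson regression): mu = exp(eta) > 0."""
    return math.exp(eta)

def inv_logit(eta):
    """Inverse of the logit link (logistic regression): mu in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical coefficients and covariate value.
b0, b1, xval = 0.5, 0.8, 2.0
eta = b0 + b1 * xval  # linear predictor

mu_count = inv_log(eta)   # expected count under a log link
mu_prob = inv_logit(eta)  # success probability under a logit link
print(f"eta = {eta}, log-link mean = {mu_count:.3f}, logit-link mean = {mu_prob:.3f}")
```

The choice of link guarantees that the fitted mean stays in the range the response permits (positive for counts, between 0 and 1 for proportions), regardless of the value of the linear predictor.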
