In statistics, quasi-likelihood methods are used to estimate parameters in a statistical model when exact likelihood methods, such as maximum likelihood estimation, are computationally infeasible. Because the function being maximized is not the true likelihood, quasi-likelihood estimators generally lose asymptotic efficiency compared with, for example, maximum likelihood estimators. Under broadly applicable conditions, however, quasi-likelihood estimators are consistent and asymptotically normal. The asymptotic covariance matrix can be obtained using the so-called sandwich estimator. Examples of quasi-likelihood methods are the generalized estimating equations and pairwise likelihood approaches.
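As an illustration of the sandwich estimator mentioned above, the following is a minimal NumPy sketch (function and variable names are chosen here for illustration, not taken from any particular library). It solves Poisson-type estimating equations with a log link by Newton's method and then forms the sandwich covariance A⁻¹BA⁻¹, where A is the derivative of the estimating function and B collects the outer products of its per-observation contributions.

```python
import numpy as np

def fit_poisson_score(X, y, n_iter=25):
    """Solve the estimating equations sum_i x_i (y_i - mu_i) = 0,
    with mu_i = exp(x_i' beta), by Newton's method."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)
        bread = X.T @ (X * mu[:, None])        # minus the derivative of the score
        beta = beta + np.linalg.solve(bread, score)
    return beta

def sandwich_cov(X, y, beta):
    """Sandwich estimator A^{-1} B A^{-1} of the asymptotic covariance."""
    mu = np.exp(X @ beta)
    A = X.T @ (X * mu[:, None])                # "bread"
    resid = y - mu
    B = X.T @ (X * (resid ** 2)[:, None])      # "meat": score outer products
    A_inv = np.linalg.inv(A)
    return A_inv @ B @ A_inv

# Illustration on overdispersed synthetic counts: negative binomial data with
# mean exp(0.3 + 0.5 x), fitted through the Poisson score equations.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
mu_true = np.exp(0.3 + 0.5 * x)
y = rng.negative_binomial(2, 2.0 / (2.0 + mu_true))

beta_hat = fit_poisson_score(X, y)
se_robust = np.sqrt(np.diag(sandwich_cov(X, y, beta_hat)))
print(beta_hat, se_robust)
```

Because the data are deliberately more variable than a Poisson model implies, the robust standard errors printed here will typically exceed the model-based ones; this is exactly the situation quasi-likelihood methods are designed to handle.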
The term quasi-likelihood function was introduced by Robert Wedderburn in 1974 to describe a function that has similar properties to the log-likelihood function but is not the log-likelihood corresponding to any actual probability distribution. He proposed to fit certain quasi-likelihood models using a straightforward extension of the algorithms used to fit generalized linear models.
Quasi-likelihood estimation is one way of allowing for overdispersion, that is, greater variability in the data than would be expected from the statistical model used. It is most often used with models for count data or grouped binary data, i.e. data that would otherwise be modelled using the Poisson or binomial distribution.
Instead of specifying a probability distribution for the data, only a relationship between the mean and the variance is specified, in the form of a variance function giving the variance as a function of the mean. Generally, this function is allowed to include a multiplicative factor known as the overdispersion parameter or scale parameter that is estimated from the data. Most commonly, the variance function is of a form such that fixing the overdispersion parameter at unity results in the variance-mean relationship of an actual probability distribution such as the binomial or Poisson. (For formulae, see the binomial data example and count data example under generalized linear models.)
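A common quasi-Poisson recipe estimates the overdispersion parameter by the Pearson statistic divided by the residual degrees of freedom, and then inflates the model-based standard errors by its square root. The sketch below assumes the statsmodels library and its Poisson GLM interface (attributes such as pearson_chi2, df_resid and bse); it is one way to carry out this computation, not the only one.

```python
import numpy as np
import statsmodels.api as sm

# Overdispersed synthetic counts: the variance exceeds the mean.
rng = np.random.default_rng(1)
x = rng.normal(size=1000)
X = sm.add_constant(x)
mu = np.exp(0.2 + 0.4 * x)
y = rng.negative_binomial(2, 2.0 / (2.0 + mu))

# An ordinary Poisson fit supplies the point estimates; the quasi-Poisson
# scale (overdispersion parameter) is then estimated from the Pearson
# chi-square statistic and used to rescale the standard errors.
res = sm.GLM(y, X, family=sm.families.Poisson()).fit()
phi_hat = res.pearson_chi2 / res.df_resid        # phi = X^2 / (n - p)
se_quasi = res.bse * np.sqrt(phi_hat)            # quasi-Poisson standard errors
print(phi_hat, se_quasi)
```

With phi fixed at 1 this reduces to the ordinary Poisson analysis, matching the remark above that unit overdispersion recovers the variance-mean relationship of an actual probability distribution.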
Inference from the particular to the general based on probability models is central to the statistical method. This course gives a graduate-level account of the main ideas of statistical inference.
Large-scale time series analysis is performed with a new statistical tool that outperforms other estimators of complex state-space models. The identified stochastic dependencies can be used for sensor ...
In statistics, count data is a statistical data type describing countable quantities, data which can take only the counting numbers, non-negative integer values {0, 1, 2, 3, ...}, and where these integers arise from counting rather than ranking. The statistical treatment of count data is distinct from that of binary data, in which the observations can take only two values, usually represented by 0 and 1, and from ordinal data, which may also consist of integers but where the individual values fall on an arbitrary scale and only the relative ranking is important.
In statistics, a generalized linear model (GLM) is a flexible generalization of ordinary linear regression. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value. Generalized linear models were formulated by John Nelder and Robert Wedderburn as a way of unifying various other statistical models, including linear regression, logistic regression and Poisson regression.
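To make the link-function idea concrete, here is a minimal sketch of iteratively reweighted least squares (IRLS), the standard fitting algorithm for GLMs, specialized to a binomial model with logit link. The function name and data are illustrative; this is a pedagogical sketch rather than a production implementation.

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Iteratively reweighted least squares for a binomial GLM with logit link.
    Each step solves a weighted least-squares problem on a working response."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                          # linear predictor
        mu = 1.0 / (1.0 + np.exp(-eta))         # inverse link (logistic)
        w = np.clip(mu * (1.0 - mu), 1e-10, None)  # GLM weights = variance function
        z = eta + (y - mu) / w                  # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

# Example: simulated binary outcomes with a single covariate.
rng = np.random.default_rng(2)
x = rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x)))
y = rng.binomial(1, p)
print(irls_logistic(X, y))
```

Wedderburn's observation, noted earlier, was that this same algorithm can be run with any variance function in place of mu(1 - mu), which is what makes quasi-likelihood models straightforward to fit with GLM machinery.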
Explores Generalized Linear Models for non-Gaussian data, covering interpretation of natural link function, MLE asymptotic normality, deviance measures, residuals, and logistic regression.
Covers the basics of linear regression, including OLS, heteroskedasticity, autocorrelation, instrumental variables, Maximum Likelihood Estimation, time series analysis, and practical advice.
Extracting maximal information from experimental data requires access to the likelihood function, which, however, is never directly available for complex experiments like those performed at high energy colliders. Theoretical predictions are obtained in this ...
Springer, 2024
Fitting network models to neural activity is an important tool in neuroscience. A popular approach is to model a brain area with a probabilistic recurrent spiking network whose parameters maximize the likelihood of the recorded activity. Although this is w ...
2021
This work is dedicated to the systematic investigation of wind turbine wakes under the effect of pressure gradients. Wind tunnel experiments are carried out with a wind turbine positioned on straight ramps of increasing angle such that it experiences an ap ...