
# Goodness of fit

Summary

The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov–Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-square test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares.
In assessing whether a given distribution is suited to a data set, the following tests and their underlying measures of fit can be used:

- Bayesian information criterion
- Kolmogorov–Smirnov test
- Cramér–von Mises criterion
- Anderson–Darling test
- Berk–Jones tests
- Shapiro–Wilk test
- Chi-squared test
- Akaike information criterion
- Hosmer–Lemeshow test
- Kuiper's test
- Kernelized Stein discrepancy
- Zhang's ZK, ZC and ZA tests
- Moran test
- Density-based empirical likelihood ratio tests
In regression analysis, and more specifically regression validation, the following topics relate to goodness of fit:

- Coefficient of determination (the R-squared measure of goodness of fit)
- Lack-of-fit sum of squares
- Mallows's Cp criterion
- Prediction error
- Reduced chi-square
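Of these, the coefficient of determination is the most commonly reported. A minimal sketch of its computation in plain Python; the observed and fitted values below are made up for illustration:

```python
def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))   # residual sum of squares
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)               # total sum of squares
    return 1 - ss_res / ss_tot

# Observed responses and fitted values from some regression model (made-up numbers):
y     = [3.0, 5.0, 7.0, 9.0]
y_hat = [2.8, 5.1, 7.2, 8.9]

r2 = r_squared(y, y_hat)
```

A value near 1 indicates that the fitted values account for most of the variation in the observations; here the fit is deliberately close, so the statistic is near 1.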
The following are examples that arise in the context of categorical data.
Pearson's chi-square test uses a measure of goodness of fit which is the sum of differences between observed and expected outcome frequencies (that is, counts of observations), each squared and divided by the expectation:

$$\chi^2 = \sum_{i} \frac{(O_i - E_i)^2}{E_i}$$

where:

- $O_i$ = an observed count for bin $i$,
- $E_i$ = an expected count for bin $i$, asserted by the null hypothesis.

The expected frequency is calculated by:

$$E_i = N \left( F(Y_u) - F(Y_l) \right)$$

where:

- $F$ = the cumulative distribution function for the probability distribution being tested,
- $Y_u$ = the upper limit for class $i$,
- $Y_l$ = the lower limit for class $i$, and
- $N$ = the sample size.

The resulting value can be compared with a chi-square distribution to determine the goodness of fit.
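The statistic translates directly into code. A minimal sketch in plain Python, using made-up counts for 120 rolls of a supposedly fair six-sided die (the data and function name are illustrative, not from the source):

```python
def chi_square_statistic(observed, expected):
    """Pearson's chi-square statistic: sum of (O_i - E_i)^2 / E_i over all bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [18, 22, 16, 25, 19, 20]   # counts of faces 1..6 in N = 120 rolls
n = sum(observed)
expected = [n / 6] * 6                # a fair die predicts N/6 = 20 rolls per face

stat = chi_square_statistic(observed, expected)
```

Here the statistic comes out to 2.5, well below the 5% critical value of about 11.07 for a chi-square distribution with 5 degrees of freedom (6 bins minus 1), so the fair-die hypothesis would not be rejected for these counts.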


Related resources: 38 publications, 2 people, 19 concepts, 18 courses, 39 lectures.

BIO-320: Morphology I

This course is an intensive preparation for the entrance examination to the third year of Medicine. The subjects taught are the macroscopic (anatomy) and microscopic (histology) morphology of the head, the …

MATH-408: Regression methods

General graduate course on regression methods

F-test

An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. Exact "F-tests" mainly arise when the models have been fitted to the data using least squares. The name was coined by George W. Snedecor, in honour of Ronald Fisher. Fisher initially developed the statistic as the variance ratio in the 1920s.

Model selection

Model selection is the task of selecting a model from among various candidates on the basis of a performance criterion. In the context of learning, this may be the selection of a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected are well-suited to the problem of model selection.

G-test

In statistics, G-tests are likelihood-ratio or maximum likelihood statistical significance tests that are increasingly being used in situations where chi-squared tests were previously recommended. The general formula for G is

$$G = 2 \sum_{i} O_i \ln\!\left(\frac{O_i}{E_i}\right)$$

where $O_i$ is the observed count in a cell, $E_i$ is the expected count under the null hypothesis, $\ln$ denotes the natural logarithm, and the sum is taken over all non-empty cells. Furthermore, the total observed count should be equal to the total expected count:

$$\sum_{i} O_i = \sum_{i} E_i = N$$

where $N$ is the total number of observations.
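The G formula is easy to compute directly. A minimal pure-Python sketch with made-up die-roll counts (the data and function name are illustrative, not from the source):

```python
import math

def g_statistic(observed, expected):
    """G = 2 * sum over non-empty cells of O_i * ln(O_i / E_i)."""
    return 2 * sum(o * math.log(o / e)
                   for o, e in zip(observed, expected) if o > 0)

observed = [18, 22, 16, 25, 19, 20]   # counts of faces 1..6 in 120 die rolls
expected = [20.0] * 6                 # fair die: 120 / 6 = 20 rolls per face

g = g_statistic(observed, expected)   # close to Pearson's chi-square for these counts
```

When the observed counts are not far from their expectations, G and Pearson's chi-square statistic are numerically close, and G is likewise referred to a chi-square distribution to obtain a p-value.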

Data Interpolation and Curve Fitting

Explores data interpolation and curve fitting techniques using MATLAB for analyzing and visualizing experimental data.

Multilinear Regression: Least Square Fit

Covers multilinear regression using least square fit method and the importance of standardizing variables.

Interpolation and Curve Fitting

Explores interpolation and curve fitting techniques using MATLAB for analyzing experimental data and smoothing curves.


A probabilistic model for estimating the fatigue life of composite laminates based on the mean value and standard deviation of the fatigue life is introduced here for predicting the distribution of fatigue life at any stress level for a constant stress rat ...

Many methods exist to model snow densification in order to calculate the depth of a single snow layer or the depth of the total snow cover from its mass. Most of these densification models need to be tightly integrated with an accumulation and melt model a ...

Michel Bierlaire, Thomas Gasos, Prateek Bansal

Outliers in discrete choice response data may result from misclassification and misreporting of the response variable and from choice behaviour that is inconsistent with modelling assumptions (e.g. random utility maximisation). In the presence of outliers, ...

2023