In statistics, Deming regression, named after W. Edwards Deming, is an errors-in-variables model that seeks the line of best fit for a two-dimensional dataset. It differs from simple linear regression in that it accounts for errors in observations on both the x- and the y-axis. It is a special case of total least squares, which allows for any number of predictors and a more complicated error structure.
Deming regression is equivalent to the maximum likelihood estimation of an errors-in-variables model in which the errors for the two variables are assumed to be independent and normally distributed, and the ratio of their variances, denoted δ, is known. In practice, this ratio might be estimated from related data sources; however, the regression procedure takes no account of possible errors in estimating this ratio.
Deming regression is only slightly more difficult to compute than simple linear regression, and most statistical software packages used in clinical chemistry offer it.
The model was originally introduced by Adcock (1878), who considered the case δ = 1, and then more generally by Kummell (1879) with arbitrary δ. However, their ideas remained largely unnoticed for more than 50 years, until they were revived by Koopmans (1936) and later propagated even more by Deming (1943). The latter book became so popular in clinical chemistry and related fields that the method was even dubbed Deming regression in those fields.
Assume that the available data (y_i, x_i) are measured observations of the "true" values (y_i^*, x_i^*), which lie on the regression line:

y_i = y_i^* + \varepsilon_i, \qquad x_i = x_i^* + \eta_i,

where the errors \varepsilon and \eta are independent and the ratio of their variances is assumed to be known:

\delta = \frac{\sigma_\varepsilon^2}{\sigma_\eta^2}.
In practice, the variances of the x and y parameters are often unknown, which complicates the estimate of δ. Note that when the measurement method for x and y is the same, these variances are likely to be equal, so δ = 1 in this case.
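To make the model concrete, the following Python sketch simulates observations from it (assuming NumPy; the parameter values beta0, beta1, sigma_eta and the sample size are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative "true" parameters and error scales (assumed for this sketch).
beta0, beta1 = 1.0, 2.0        # regression line y* = beta0 + beta1 * x*
sigma_eta = 0.5                # standard deviation of the x-errors eta
delta = 1.0                    # assumed-known ratio sigma_eps^2 / sigma_eta^2
sigma_eps = np.sqrt(delta) * sigma_eta

n = 200
x_star = rng.uniform(0.0, 10.0, size=n)   # true x-values
y_star = beta0 + beta1 * x_star           # true y-values on the line

# Observations: both coordinates carry independent normal errors.
x = x_star + rng.normal(0.0, sigma_eta, size=n)
y = y_star + rng.normal(0.0, sigma_eps, size=n)
```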
We seek to find the line of "best fit"

y^* = \beta_0 + \beta_1 x^*,

such that the weighted sum of squared residuals of the model is minimized:

SSR = \sum_{i=1}^n \left( \frac{\varepsilon_i^2}{\sigma_\varepsilon^2} + \frac{\eta_i^2}{\sigma_\eta^2} \right) = \frac{1}{\sigma_\varepsilon^2} \sum_{i=1}^n \left[ (y_i - \beta_0 - \beta_1 x_i^*)^2 + \delta\,(x_i - x_i^*)^2 \right] \;\to\; \min_{\beta_0,\,\beta_1,\,x_1^*,\ldots,\,x_n^*}
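This minimization can be carried out in two stages (a standard step, sketched here for completeness). For fixed \beta_0 and \beta_1, each nuisance parameter x_i^* enters only its own summand, and minimizing over it gives

\hat{x}_i^*(\beta_0, \beta_1) = x_i + \frac{\beta_1}{\beta_1^2 + \delta}\,(y_i - \beta_0 - \beta_1 x_i),

which reduces the objective to the profiled form

\frac{1}{\sigma_\varepsilon^2} \sum_{i=1}^n \frac{\delta\,(y_i - \beta_0 - \beta_1 x_i)^2}{\beta_1^2 + \delta},

a reweighted sum of ordinary residuals that is then minimized over \beta_0 and \beta_1 alone.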
The resulting solution can be expressed in terms of the second-degree sample moments

\overline{x} = \frac{1}{n} \sum_{i=1}^n x_i, \qquad \overline{y} = \frac{1}{n} \sum_{i=1}^n y_i,

s_{xx} = \frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2, \qquad s_{xy} = \frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})(y_i - \overline{y}), \qquad s_{yy} = \frac{1}{n} \sum_{i=1}^n (y_i - \overline{y})^2,

as

\hat\beta_1 = \frac{s_{yy} - \delta s_{xx} + \sqrt{(s_{yy} - \delta s_{xx})^2 + 4\delta s_{xy}^2}}{2 s_{xy}}, \qquad \hat\beta_0 = \overline{y} - \hat\beta_1 \overline{x},

with the fitted true values \hat{x}_i^* = x_i + \frac{\hat\beta_1}{\hat\beta_1^2 + \delta}\,(y_i - \hat\beta_0 - \hat\beta_1 x_i).
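As a concrete sketch of how little computation this requires, the following Python function implements the closed-form estimates above (assuming NumPy; the function name deming_fit is illustrative, not part of the original text):

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Closed-form Deming regression: returns (beta0_hat, beta1_hat).

    `delta` is the assumed-known ratio of the y- to x-error variances,
    sigma_eps^2 / sigma_eta^2; delta = 1 gives orthogonal regression.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()

    # Second-degree sample moments about the means
    # (any common divisor cancels in the slope formula).
    sxx = np.mean((x - xbar) ** 2)
    sxy = np.mean((x - xbar) * (y - ybar))  # sketch assumes sxy != 0
    syy = np.mean((y - ybar) ** 2)

    # Slope from the closed-form solution, then the intercept.
    beta1 = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
    beta0 = ybar - beta1 * xbar
    return beta0, beta1
```

Applied to the simulated (x, y) from the earlier sketch, deming_fit(x, y, delta=1.0) should recover beta0 ≈ 1 and beta1 ≈ 2 up to sampling noise.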