Reduced chi-squared statistic
In statistics, the reduced chi-square statistic is used extensively in goodness of fit testing. It is also known as mean squared weighted deviation (MSWD) in isotopic dating and variance of unit weight in the context of weighted least squares. Its square root is called the regression standard error, standard error of the regression, or standard error of the equation. It is defined as chi-square per degree of freedom:
\[
\chi^2_\nu = \frac{\chi^2}{\nu},
\]
where the chi-squared is a weighted sum of squared deviations:
\[
\chi^2 = \sum_i \frac{(O_i - C_i)^2}{\sigma_i^2},
\]
with inputs: variance \( \sigma_i^2 \), observations \( O \), and calculated data \( C \), and where \( \nu \) is the number of degrees of freedom.
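As a minimal numerical sketch of this definition (NumPy assumed; the function and variable names are illustrative, not from the article):

import numpy as np

def reduced_chi_squared(observed, calculated, sigma, n_params):
    """Chi-square per degree of freedom for a weighted fit.

    observed, calculated, sigma: 1-D arrays of equal length.
    n_params: number of fitted parameters, so that the degrees of
    freedom are nu = n_observations - n_params.
    """
    residuals = (observed - calculated) / sigma
    chi2 = np.sum(residuals**2)       # weighted sum of squared deviations
    nu = len(observed) - n_params     # degrees of freedom
    return chi2 / nu

# A value near 1 suggests the scatter about the model matches the
# stated variances.
obs = np.array([1.1, 1.9, 3.2])
calc = np.array([1.0, 2.0, 3.0])
sigma = np.array([0.1, 0.1, 0.2])
print(reduced_chi_squared(obs, calc, sigma, n_params=1))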
Regret (decision theory)
In decision theory, when making decisions under uncertainty, the human emotional response of regret is often experienced if information about the best course of action arrives after a fixed decision has been taken; it can be measured as the difference in value between the decision actually made and the optimal decision. The theory of regret aversion or anticipated regret proposes that when facing a decision, individuals might anticipate regret and thus incorporate into their choice their desire to eliminate or reduce this possibility.
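As an illustrative sketch of measuring regret this way (the payoff table is invented for the example; a minimax-regret rule is one common way to use such a table):

import numpy as np

# Rows: decisions; columns: possible states of the world (payoffs invented).
payoffs = np.array([
    [10.0,  2.0],   # decision A
    [ 6.0,  5.0],   # decision B
])

# Regret of each decision in each state: the best achievable payoff in
# that state minus the payoff actually obtained.
regret = payoffs.max(axis=0) - payoffs

# Minimax regret: choose the decision whose worst-case regret is smallest.
best = np.argmin(regret.max(axis=1))
print(regret)                                  # [[0. 3.], [4. 0.]]
print(f"minimax-regret choice: decision {'AB'[best]}")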
Least squares
The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems (sets of equations in which there are more equations than unknowns) by minimizing the sum of the squares of the residuals (a residual being the difference between an observed value and the fitted value provided by a model) made in the results of each individual equation. The most important application is in data fitting.
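A minimal sketch of least-squares data fitting with NumPy (the data points are invented for illustration):

import numpy as np

# Overdetermined system: fit y ~ a*x + b from more observations than unknowns.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])

# Design matrix with a column of ones for the intercept.
A = np.column_stack([x, np.ones_like(x)])

# Least squares: minimize ||A @ [a, b] - y||^2 over a and b.
(a, b), _, _, _ = np.linalg.lstsq(A, y, rcond=None)
print(a, b)             # fitted slope and intercept
print(y - A @ [a, b])   # residuals: observed minus fitted values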
Deep learning
Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" in deep learning refers to the use of multiple layers in the network. Learning can be supervised, semi-supervised or unsupervised.
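As a loose sketch of the "multiple layers" idea only (random, untrained placeholder weights; not a full deep-learning system):

import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Two stacked layers: each applies a linear map followed by a nonlinearity,
# so later layers operate on representations produced by earlier ones.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # layer 1: 3 inputs -> 4 hidden
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # layer 2: 4 hidden -> 2 outputs

def forward(x):
    h = relu(W1 @ x + b1)   # first-layer representation
    return W2 @ h + b2      # output layer

print(forward(np.array([1.0, -0.5, 0.2])))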
Root-mean-square deviation
The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample or population values) predicted by a model or an estimator and the values observed. The RMSD represents the square root of the second sample moment of the differences between predicted values and observed values or the quadratic mean of these differences. These deviations are called residuals when the calculations are performed over the data sample that was used for estimation and are called errors (or prediction errors) when computed out-of-sample.
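A minimal sketch of the definition (NumPy assumed; data invented for illustration):

import numpy as np

def rmse(predicted, observed):
    """Root-mean-square error: square root of the mean squared difference."""
    diff = np.asarray(predicted) - np.asarray(observed)
    return np.sqrt(np.mean(diff**2))

print(rmse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # ~0.1414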
Square-integrable function
In mathematics, a square-integrable function, also called a quadratically integrable function, an \( L^2 \) function, or a square-summable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. Thus, square-integrability on the real line is defined as follows:
\[
f : \mathbb{R} \to \mathbb{C} \text{ is square-integrable} \iff \int_{-\infty}^{\infty} |f(x)|^2 \, dx < \infty .
\]
One may also speak of quadratic integrability over bounded intervals such as \( [a, b] \) for \( a \le b \), requiring \( \int_a^b |f(x)|^2 \, dx < \infty \). An equivalent definition is to say that the square of the function itself (rather than of its absolute value) is Lebesgue integrable.
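For instance, the Gaussian is square-integrable on the real line while the constant function 1 is not:
\[
\int_{-\infty}^{\infty} \left| e^{-x^2} \right|^2 dx
  = \int_{-\infty}^{\infty} e^{-2x^2} \, dx
  = \sqrt{\tfrac{\pi}{2}} < \infty,
\qquad
\int_{-\infty}^{\infty} |1|^2 \, dx = \infty .
\]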
Linear regression
In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.
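A minimal sketch of simple linear regression via the closed-form ordinary-least-squares estimates (data invented for illustration):

import numpy as np

# Simple linear regression: one explanatory variable x, scalar response y.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Closed-form OLS estimates:
# slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x).
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
intercept = y.mean() - slope * x.mean()
print(slope, intercept)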
Schwartz space
In mathematics, Schwartz space \( \mathcal{S} \) is the function space of all functions whose derivatives are rapidly decreasing. This space has the important property that the Fourier transform is an automorphism on this space. This property enables one, by duality, to define the Fourier transform for elements in the dual space of \( \mathcal{S} \), that is, for tempered distributions. A function in the Schwartz space is sometimes called a Schwartz function. Schwartz space is named after the French mathematician Laurent Schwartz.
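In one dimension, the rapid-decrease condition can be written concretely as:
\[
\mathcal{S}(\mathbb{R}) = \Big\{ f \in C^\infty(\mathbb{R}) :
  \sup_{x \in \mathbb{R}} \big| x^{\alpha} f^{(\beta)}(x) \big| < \infty
  \ \text{for all integers } \alpha, \beta \ge 0 \Big\} .
\]
For example, the Gaussian \( e^{-x^2} \) is a Schwartz function, while \( 1/(1+x^2) \) is not, since its decay cannot beat every power of \( x \).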
Norm (mathematics)
In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector.
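For example, the Euclidean norm, its value on the vector (3, 4), and the scaling property mentioned above:
\[
\|x\|_2 = \sqrt{x_1^2 + \cdots + x_n^2},
\qquad
\|(3, 4)\|_2 = \sqrt{3^2 + 4^2} = 5,
\qquad
\|\lambda x\|_2 = |\lambda| \, \|x\|_2 .
\]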
Heteroskedasticity-consistent standard errors
The topic of heteroskedasticity-consistent (HC) standard errors arises in statistics and econometrics in the context of linear regression and time series analysis. These are also known as heteroskedasticity-robust standard errors (or simply robust standard errors), Eicker–Huber–White standard errors (also Huber–White standard errors or White standard errors), to recognize the contributions of Friedhelm Eicker, Peter J. Huber, and Halbert White.
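A minimal sketch of the Eicker–Huber–White (HC0) "sandwich" estimator under these definitions (simulated heteroskedastic data; function and variable names are illustrative):

import numpy as np

def hc0_standard_errors(X, y):
    """HC0 (White) standard errors for OLS coefficients.

    X: (n, k) design matrix (include a column of ones for the intercept).
    y: (n,) response vector.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y          # OLS estimates
    e = y - X @ beta                  # residuals
    # Sandwich estimator: (X'X)^-1 X' diag(e^2) X (X'X)^-1
    meat = X.T @ (e[:, None]**2 * X)
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 1.0 + 2.0 * x + rng.normal(scale=x)    # noise variance grows with x
X = np.column_stack([np.ones_like(x), x])
print(hc0_standard_errors(X, y))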