Generalized linear mixed model: In statistics, a generalized linear mixed model (GLMM) is an extension to the generalized linear model (GLM) in which the linear predictor contains random effects in addition to the usual fixed effects. They also inherit from GLMs the idea of extending linear mixed models to non-normal data. GLMMs provide a broad range of models for the analysis of grouped data, since the differences between groups can be modelled as a random effect. These models are useful in the analysis of many kinds of data, including longitudinal data.
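A minimal sketch of the GLMM structure described above, shown as a data-generating process for a logistic random-intercept model; the group counts, coefficient values, and variable names are illustrative assumptions, not part of any particular fitted model.

```python
import numpy as np

# Sketch of a GLMM data-generating process: logistic regression with a
# random intercept per group. All numeric values here are illustrative.
rng = np.random.default_rng(0)

n_groups, n_per_group = 20, 50
beta0, beta1 = -0.5, 1.2            # fixed effects (intercept and slope)
sigma_u = 0.8                        # standard deviation of the random intercepts

u = rng.normal(0.0, sigma_u, n_groups)             # one random effect per group
group = np.repeat(np.arange(n_groups), n_per_group)
x = rng.normal(size=n_groups * n_per_group)        # a single covariate

# Linear predictor = fixed effects + the random effect of the observation's group
eta = beta0 + beta1 * x + u[group]
p = 1.0 / (1.0 + np.exp(-eta))                     # inverse logit link
y = rng.binomial(1, p)                             # non-normal (Bernoulli) response

print(y.mean())
```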
Log-linear model: A log-linear model is a mathematical model that takes the form of a function whose logarithm equals a linear combination of the parameters of the model, which makes it possible to apply (possibly multivariate) linear regression. That is, it has the general form exp(c + Σᵢ wᵢ fᵢ(X)), in which the fᵢ(X) are quantities that are functions of the variable X, in general a vector of values, while c and the wᵢ stand for the model parameters. The term may specifically be used for: a log-linear plot or graph, which is a type of semi-log plot.
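A small sketch of the general form exp(c + Σᵢ wᵢ fᵢ(X)) given above; the particular functions fᵢ and parameter values are arbitrary examples chosen only to show that taking logarithms recovers a linear combination of the parameters.

```python
import numpy as np

# Toy log-linear model: y = exp(c + w1*f1(X) + w2*f2(X)) for illustrative f_i and weights.
def f1(x): return x[0]
def f2(x): return x[0] * x[1]

c = 0.3
w = np.array([0.7, -0.2])

def model(x):
    feats = np.array([f1(x), f2(x)])
    return np.exp(c + w @ feats)

x = np.array([1.5, 2.0])
y = model(x)
# log(y) equals the linear combination c + w @ feats, so ordinary
# linear regression can be applied to the logged response.
print(np.log(y), c + w @ np.array([f1(x), f2(x)]))
```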
Demand forecasting: Demand forecasting (to be distinguished from sales forecasting, which incorporates production constraints) is the process of estimating the consumption of products or services over future periods. It makes it possible to plan production so as to shorten delivery times and optimize inventory levels. Demand forecasting is also a fundamental step in establishing an S&OP (sales and operations plan) or a business plan used to study the economic viability of a project or a company.
Exponential smoothing: Exponential smoothing is an empirical method for smoothing and forecasting time-series data affected by random noise. As in the moving-average method, each data point is smoothed in turn, starting from the initial value. Exponential smoothing assigns past observations weights that decrease exponentially with their age. Exponential smoothing is one of the windowing methods used in signal processing: it acts as a low-pass filter, removing the high frequencies of the original signal.
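A minimal sketch of simple exponential smoothing as described above: each smoothed value mixes the new observation with the previous smoothed value, so older observations receive exponentially decreasing weights. The series and the smoothing constant alpha below are illustrative.

```python
import numpy as np

def exponential_smoothing(x, alpha):
    # s_t = alpha * x_t + (1 - alpha) * s_{t-1}, starting from the initial value
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

x = np.array([3.0, 5.0, 4.0, 6.0, 7.0, 6.5, 8.0])
print(exponential_smoothing(x, alpha=0.3))
```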
Qualitative property: Qualitative properties are properties that are observed and can generally not be measured with a numerical result. They are contrasted with quantitative properties, which have numerical characteristics. Some engineering and scientific properties are qualitative. A test method can result in qualitative data about something. This can be a categorical result or a binary classification (e.g., pass/fail, go/no go, conform/non-conform). It can sometimes be an engineering judgement. The data that all share a qualitative property form a nominal category.
Smoothing spline: Smoothing splines are function estimates, f̂(x), obtained from a set of noisy observations yᵢ of the target f(xᵢ), in order to balance a measure of goodness of fit of f̂(xᵢ) to yᵢ with a derivative-based measure of the smoothness of f̂(x). They provide a means for smoothing noisy data. The most familiar example is the cubic smoothing spline, but there are many other possibilities, including the case where x is a vector quantity. Let {xᵢ, Yᵢ : i = 1, …, n} be a set of observations, modeled by the relation Yᵢ = f(xᵢ) + εᵢ, where the εᵢ are independent, zero-mean random variables (usually assumed to have constant variance).
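A short sketch of smoothing noisy observations Yᵢ = f(xᵢ) + εᵢ with a cubic spline smoother; it uses SciPy's UnivariateSpline as the smoother, with an illustrative target function, noise level, and smoothing factor s (which controls the fit/smoothness trade-off) chosen as assumptions for the example.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 60)
y = np.sin(x) + rng.normal(0.0, 0.3, x.size)       # noisy observations of f(x) = sin(x)

# Cubic (k=3) spline smoother; larger s means a smoother, less wiggly estimate.
spline = UnivariateSpline(x, y, k=3, s=len(x) * 0.3 ** 2)
f_hat = spline(x)                                   # the smoothed estimate of f

print(np.max(np.abs(f_hat - np.sin(x))))
```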
Canonical analysis: In statistics, canonical analysis (from the Greek κανων: bar, measuring rod, ruler) belongs to the family of regression methods for data analysis. Regression analysis quantifies a relationship between a predictor variable and a criterion variable by the coefficient of correlation r, the coefficient of determination r², and the standard regression coefficient β. Multiple regression analysis expresses a relationship between a set of predictor variables and a single criterion variable by the multiple correlation R, the multiple coefficient of determination R², and a set of standard partial regression weights β₁, β₂, etc.
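An illustrative sketch of the regression quantities listed above (the correlation r, the coefficient of determination r², the multiple R², and standardized regression weights), computed on simulated data; the data and coefficient values are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 2))                        # two predictor variables
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0.0, 0.5, n)

# Simple regression: correlation between one predictor and the criterion
r = np.corrcoef(X[:, 0], y)[0, 1]

# Multiple regression: least-squares fit, multiple R^2, and standardized betas
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
R2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
betas = coef[1:] * X.std(axis=0) / y.std()

print(r, r ** 2, R2, betas)
```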
Projection pursuit regression: In statistics, projection pursuit regression (PPR) is a statistical model developed by Jerome H. Friedman and Werner Stuetzle that extends additive models. It adapts additive models in that it first projects the data matrix of explanatory variables in the optimal direction before applying smoothing functions to these explanatory variables. The model consists of linear combinations of ridge functions: non-linear transformations of linear combinations of the explanatory variables.
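A minimal sketch of the PPR model form described above: a sum of ridge functions, i.e. nonlinear transformations gₖ applied to linear projections wₖ·x of the explanatory variables. The projection directions and ridge functions below are illustrative assumptions, not fitted by any projection pursuit procedure.

```python
import numpy as np

directions = np.array([[0.6, 0.8],
                       [1.0, 0.0]])                # projection directions w_k
ridge_fns = [np.tanh, lambda z: z ** 2]            # ridge functions g_k

def ppr_predict(X):
    # X has shape (n_samples, n_features); each term is g_k(X @ w_k)
    return sum(g(X @ w) for g, w in zip(ridge_fns, directions))

X = np.array([[0.5, -1.0], [1.5, 2.0]])
print(ppr_predict(X))
```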
Backfitting algorithm: In statistics, the backfitting algorithm is a simple iterative procedure used to fit a generalized additive model. It was introduced in 1985 by Leo Breiman and Jerome Friedman along with generalized additive models. In most cases, the backfitting algorithm is equivalent to the Gauss–Seidel method, an algorithm used for solving a certain linear system of equations. Additive models are a class of non-parametric regression models of the form Y = α + Σⱼ fⱼ(Xⱼ) + ε, where each Xⱼ is a variable in our p-dimensional predictor X, and Y is our outcome variable.
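A minimal sketch of backfitting for an additive model Y = α + Σⱼ fⱼ(Xⱼ) + ε: each component fⱼ is repeatedly re-estimated by smoothing the partial residuals that exclude it. The crude running-mean smoother and the simulated data are illustrative assumptions, not the canonical implementation.

```python
import numpy as np

def running_mean_smoother(x, r, width=11):
    # Smooth partial residuals r against x by averaging nearby points in x-order.
    order = np.argsort(x)
    kernel = np.ones(width) / width
    smoothed = np.convolve(r[order], kernel, mode="same")
    out = np.empty_like(r)
    out[order] = smoothed
    return out

def backfit(X, y, n_iter=20):
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((p, n))                            # current estimates f_j(x_ij)
    for _ in range(n_iter):
        for j in range(p):
            partial = y - alpha - f.sum(axis=0) + f[j]   # residuals excluding f_j
            f[j] = running_mean_smoother(X[:, j], partial)
            f[j] -= f[j].mean()                     # centre each f_j for identifiability
    return alpha, f

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0.0, 0.2, 300)
alpha, f = backfit(X, y)
print(alpha)
```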
Deming regression: In statistics, Deming regression, named after W. Edwards Deming, is an errors-in-variables model that tries to find the line of best fit for a two-dimensional data set. It differs from simple linear regression in that it accounts for errors in observations on both the x- and the y-axis. It is a special case of total least squares, which allows for any number of predictors and a more complicated error structure.
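A minimal sketch of Deming regression using the standard closed-form slope, assuming a known ratio δ of the y-error variance to the x-error variance (δ = 1 gives orthogonal regression); the simulated data with errors in both coordinates are an illustrative assumption.

```python
import numpy as np

def deming(x, y, delta=1.0):
    # delta is the ratio of the y-error variance to the x-error variance.
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ym - slope * xm
    return intercept, slope

rng = np.random.default_rng(4)
true_x = np.linspace(0, 10, 50)
x = true_x + rng.normal(0.0, 0.5, 50)               # errors in the x observations
y = 2.0 + 1.5 * true_x + rng.normal(0.0, 0.5, 50)   # and in the y observations
print(deming(x, y))
```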