Analysis of variance
In statistics, analysis of variance (ANOVA) is a collection of statistical models used to test whether group means come from a single population. The groups correspond to the levels of a categorical variable (e.g. variable: treatment; levels: sports training program, dietary supplements, placebo), and the means are computed from a continuous variable (e.g. muscle gain).
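As a sketch of the computation behind a one-way ANOVA, the F statistic compares variability between group means to variability within groups. The data below are made up purely for illustration:

```python
import numpy as np

# Three hypothetical treatment groups (made-up outcome values)
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([2.0, 3.0, 4.0]),
          np.array([10.0, 11.0, 12.0])]

k = len(groups)                      # number of groups
N = sum(len(g) for g in groups)      # total number of observations
grand_mean = np.concatenate(groups).mean()

# Between-group sum of squares: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: spread of observations around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

F = (ss_between / (k - 1)) / (ss_within / (N - k))
print(F)  # a large F suggests the group means differ
```

In practice one would compare F to an F-distribution with (k − 1, N − k) degrees of freedom, e.g. via `scipy.stats.f_oneway`.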
SPSS
SPSS (Statistical Package for the Social Sciences) is a software package used for statistical analysis. It is also the name of the company that sells it (SPSS Inc.). In 2009, the company renamed its products PASW (Predictive Analytics Software) and was acquired by IBM for 1.24 billion US dollars. The first version of SPSS went on sale in 1968, and it is among the programs used for statistical analysis in the social sciences.
Pooled variance
In statistics, pooled variance (also known as combined variance, composite variance, or overall variance, and often denoted s_p^2) is a method for estimating the variance of several different populations when the mean of each population may be different, but one may assume that the variance of each population is the same. The numerical estimate resulting from the use of this method is also called the pooled variance. Under the assumption of equal population variances, the pooled sample variance provides a higher precision estimate of variance than the individual sample variances.
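A minimal sketch of the pooled-variance formula, weighting each sample variance by its degrees of freedom; the two samples are made up for illustration:

```python
import numpy as np

# Two hypothetical samples assumed to share a common population variance
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0, 8.0])

def pooled_variance(samples):
    # s_p^2 = sum((n_i - 1) * s_i^2) / sum(n_i - 1)
    num = sum((len(s) - 1) * np.var(s, ddof=1) for s in samples)
    den = sum(len(s) - 1 for s in samples)
    return num / den

sp2 = pooled_variance([a, b])
print(sp2)
```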
Location test
A location test is a statistical hypothesis test that compares the location parameter of a statistical population to a given constant, or that compares the location parameters of two statistical populations to each other. Most commonly, the location parameters of interest are expected values, but location tests based on medians or other measures of location are also used. The one-sample location test compares the location parameter of one sample to a given constant.
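A familiar one-sample location test is the one-sample t-test; a minimal sketch of its statistic, with made-up data and a made-up reference constant:

```python
import math

sample = [5.0, 6.0, 7.0, 8.0, 9.0]   # hypothetical observations
mu0 = 6.0                            # given constant the location is compared to

n = len(sample)
mean = sum(sample) / n
s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
t = (mean - mu0) / math.sqrt(s2 / n)                 # t statistic, n - 1 df
print(t)
```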
Fisher's method
In statistics, Fisher's method, also known as Fisher's combined probability test, is a technique for data fusion or "meta-analysis" (analysis of analyses). It was developed by and named for Ronald Fisher. In its basic form, it is used to combine the results from several independent tests bearing upon the same overall hypothesis (H0). Fisher's method combines the extreme-value probabilities from each test, commonly known as "p-values", into one test statistic (X²) using the formula X² = −2 Σᵢ ln(pᵢ), where pᵢ is the p-value for the ith hypothesis test; under H0, X² follows a chi-squared distribution with 2k degrees of freedom, where k is the number of tests combined.
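The combined statistic is a one-liner; the p-values below are hypothetical:

```python
import math

# Hypothetical p-values from three independent tests of the same H0
p_values = [0.01, 0.2, 0.5]

# Fisher's combined statistic: X^2 = -2 * sum(ln p_i)
X2 = -2.0 * sum(math.log(p) for p in p_values)
df = 2 * len(p_values)  # under H0, X^2 ~ chi-squared with 2k degrees of freedom
print(X2, df)
```

A combined p-value would then come from the chi-squared survival function, e.g. `scipy.stats.chi2.sf(X2, df)`.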
Hotelling's T-squared distribution
In statistics, particularly in hypothesis testing, Hotelling's T-squared distribution (T²), proposed by Harold Hotelling, is a multivariate probability distribution that is closely related to the F-distribution and is most notable for arising as the distribution of a set of sample statistics that are natural generalizations of the statistics underlying Student's t-distribution. Hotelling's t-squared statistic (t²) is a generalization of Student's t-statistic that is used in multivariate hypothesis testing.
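A sketch of the one-sample T² statistic, T² = n (x̄ − μ0)′ S⁻¹ (x̄ − μ0), where S is the sample covariance matrix; the bivariate data and hypothesized mean are made up:

```python
import numpy as np

# Hypothetical bivariate sample (rows = observations, columns = variables)
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0]])
mu0 = np.array([2.0, 2.0])  # hypothesized mean vector

n = X.shape[0]
xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)         # sample covariance matrix (ddof=1)
d = xbar - mu0
T2 = n * d @ np.linalg.solve(S, d)  # avoids forming the explicit inverse
print(T2)
```

A scaled version of T² is compared to an F-distribution, which is how the T² and F distributions are tied together.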
Pearson correlation coefficient
In statistics, the Pearson correlation coefficient (PCC) is a correlation coefficient that measures linear correlation between two sets of data. It is the ratio between the covariance of two variables and the product of their standard deviations; thus, it is essentially a normalized measurement of the covariance, such that the result always has a value between −1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationships or correlations.
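The covariance-over-standard-deviations ratio reduces to centered sums; the paired data below are made up:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])  # made-up paired observations
y = np.array([1.0, 3.0, 2.0, 4.0])

# r = cov(x, y) / (std(x) * std(y)); the (n-1) factors cancel,
# leaving centered cross- and self-sums
dx, dy = x - x.mean(), y - y.mean()
r = (dx @ dy) / np.sqrt((dx @ dx) * (dy @ dy))
print(r)  # always lies in [-1, 1]
```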
Wilcoxon signed-rank test
In statistics, the Wilcoxon signed-rank test is a non-parametric alternative to Student's t-test for paired samples. The test concerns a location parameter, the median: the aim is to test whether there has been a shift in the median. The procedure assumes that the variables under study were measured on a scale that allows the observations to be ordered in ranks for each variable (that is, an ordinal scale), and that the rank differences between the variables are meaningful.
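A sketch of the signed-rank computation on made-up paired measurements; the example is deliberately chosen with no zero or tied differences, so simple ranking suffices (real implementations such as `scipy.stats.wilcoxon` handle ties and zeros):

```python
# Hypothetical paired measurements (before/after some treatment)
before = [10.0, 12.0, 9.0, 15.0, 8.0]
after  = [11.0, 10.0, 12.0, 11.0, 13.0]

diffs = [a - b for a, b in zip(after, before)]  # paired differences

# Rank the absolute differences (1 = smallest); no ties here by construction
order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
ranks = [0] * len(diffs)
for rank, i in enumerate(order, start=1):
    ranks[i] = rank

# W+ = sum of ranks of positive differences, W- of negative ones
w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
print(min(w_plus, w_minus))  # the test statistic W
```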
Bartlett's test
In statistics, Bartlett's test, named after the English statistician Maurice Stevenson Bartlett, is used to assess whether k independent samples are drawn from populations with the same variance (a condition known as homoscedasticity). It is a parametric test. Like Fisher's test, Bartlett's test of equality of variances breaks down completely as soon as the data depart, even slightly, from a Gaussian distribution.
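A sketch of Bartlett's statistic, which compares the log of the pooled variance to the weighted average of the log sample variances; the samples below are made up with deliberately unequal spread (in practice `scipy.stats.bartlett` does this in one call):

```python
import math

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# k hypothetical independent samples with visibly different spread
samples = [[1.0, 2.0, 3.0, 4.0],
           [2.0, 6.0, 10.0, 14.0],
           [5.0, 5.5, 6.0, 6.5]]

k = len(samples)
n = [len(s) for s in samples]
N = sum(n)
s2 = [sample_var(s) for s in samples]
sp2 = sum((ni - 1) * v for ni, v in zip(n, s2)) / (N - k)  # pooled variance

num = (N - k) * math.log(sp2) - sum((ni - 1) * math.log(v)
                                    for ni, v in zip(n, s2))
corr = 1 + (sum(1 / (ni - 1) for ni in n) - 1 / (N - k)) / (3 * (k - 1))
stat = num / corr  # approximately chi-squared with k-1 df under H0
print(stat)
```

By concavity of the logarithm the numerator is non-negative, and it is zero only when all sample variances coincide.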
Simple linear regression
In statistics, simple linear regression is a linear regression model with a single explanatory variable. That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the x and y coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjective simple refers to the fact that the outcome variable is related to a single predictor.
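The least-squares fit has a closed form: the slope is the ratio of the centered cross-sum to the centered sum of squares of x, and the intercept follows from the means. A minimal sketch on made-up points:

```python
# Made-up two-dimensional sample points (x = predictor, y = outcome)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n

# Least-squares slope and intercept of the fitted line y = a + b*x
b = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
     / sum((x - x_mean) ** 2 for x in xs))
a = y_mean - b * x_mean
print(a, b)
```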