In statistics, particularly in hypothesis testing, the Hotelling's T-squared distribution ($T^2$), proposed by Harold Hotelling, is a multivariate probability distribution that is tightly related to the F-distribution and is most notable for arising as the distribution of a set of sample statistics that are natural generalizations of the statistics underlying the Student's t-distribution.
The Hotelling's t-squared statistic ($t^2$) is a generalization of Student's t-statistic that is used in multivariate hypothesis testing.
The distribution arises in multivariate statistics in undertaking tests of the differences between the (multivariate) means of different populations, where tests for univariate problems would make use of a t-test.
The distribution is named for Harold Hotelling, who developed it as a generalization of Student's t-distribution.
If the vector $d$ is Gaussian multivariate-distributed with zero mean and unit covariance matrix, $d \sim \mathcal{N}_p(0, I_p)$, and $M$ is a $p \times p$ matrix with unit scale matrix and $m$ degrees of freedom with a Wishart distribution, $M \sim W(I_p, m)$, then the quadratic form $X$ has a Hotelling distribution (with parameters $p$ and $m$):
$$X = m\, d'\, M^{-1}\, d \sim T^2(p, m).$$
Furthermore, if a random variable $X$ has Hotelling's T-squared distribution, $X \sim T^2_{p,m}$, then:
$$\frac{m - p + 1}{p\, m}\, X \sim F_{p,\, m-p+1},$$
where $F_{p,\, m-p+1}$ is the F-distribution with parameters $p$ and $m - p + 1$.
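For illustration, the CDF of a T-squared variable can be evaluated by rescaling to the F-distribution exactly as above; the following is a minimal Python sketch using scipy.stats.f (the helper name hotelling_t2_cdf is chosen here for illustration and is not part of any library):

```python
from scipy.stats import f

def hotelling_t2_cdf(x, p, m):
    # P(X <= x) for X ~ T^2(p, m), using the relation
    # (m - p + 1) / (p * m) * X ~ F(p, m - p + 1)
    scale = (m - p + 1) / (p * m)
    return f.cdf(scale * x, dfn=p, dfd=m - p + 1)

# e.g. P(X <= 10) for a T^2 distribution with p = 3, m = 20
print(hotelling_t2_cdf(10.0, p=3, m=20))
```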
Let $x_1, \dots, x_n$ be independent observations from a $p$-variate normal distribution with mean $\mu$ and covariance $\Sigma$, and let $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$ denote the sample mean. Let $\hat{\Sigma}$ be the sample covariance:
$$\hat{\Sigma} = \frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})',$$
where we denote transpose by an apostrophe. It can be shown that $\hat{\Sigma}$ is a positive (semi-)definite matrix and that $(n-1)\hat{\Sigma}$ follows a $p$-variate Wishart distribution with $n-1$ degrees of freedom.
The sample covariance matrix of the mean reads $\hat{\Sigma}_{\bar{x}} = \hat{\Sigma}/n$.
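As a numerical illustration, the sample covariance and the covariance of the mean can be obtained with NumPy; this is a minimal sketch on simulated data (the variable names are chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # n = 50 observations of a p = 3 dimensional vector

n, p = X.shape
x_bar = X.mean(axis=0)                       # sample mean
Sigma_hat = np.cov(X, rowvar=False, ddof=1)  # sample covariance, divides by n - 1
Sigma_hat_mean = Sigma_hat / n               # covariance matrix of the sample mean
```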
The Hotelling's t-squared statistic is then defined as:
$$t^2 = (\bar{x} - \mu)'\, \hat{\Sigma}_{\bar{x}}^{-1}\, (\bar{x} - \mu) = n\, (\bar{x} - \mu)'\, \hat{\Sigma}^{-1}\, (\bar{x} - \mu),$$
which is proportional to the squared Mahalanobis distance between the sample mean and $\mu$. Because of this, one should expect the statistic to assume low values if $\bar{x} \approx \mu$, and high values if they are different.
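The statistic can be computed directly from a data matrix; the following is a minimal NumPy sketch (the helper name hotelling_t2_statistic is chosen here for illustration, and np.cov already applies the $n-1$ divisor used above):

```python
import numpy as np

def hotelling_t2_statistic(X, mu):
    # t^2 = n (x_bar - mu)' Sigma_hat^{-1} (x_bar - mu) for an (n, p) data matrix X
    n, p = X.shape
    x_bar = X.mean(axis=0)
    Sigma_hat = np.cov(X, rowvar=False)   # divides by n - 1 by default
    diff = x_bar - mu
    return n * diff @ np.linalg.solve(Sigma_hat, diff)

# Example: value of t^2 against a hypothesised mean of zero
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
print(hotelling_t2_statistic(X, np.zeros(3)))
```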
From the distribution,
$$t^2 \sim T^2_{p,\, n-1} = \frac{p\,(n-1)}{n-p}\, F_{p,\, n-p},$$
where $F_{p,\, n-p}$ is the F-distribution with parameters $p$ and $n - p$.
In order to calculate a p-value (unrelated to the variable $p$ here), note that the distribution of $t^2$ equivalently implies that
$$\frac{n-p}{p\,(n-1)}\, t^2 \sim F_{p,\, n-p}.$$
Then, use the quantity on the left-hand side to evaluate the p-value corresponding to the sample, which comes from the F-distribution.
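For example, the p-value can be obtained by applying this rescaling and taking the upper tail of the F-distribution; the sketch below uses scipy.stats.f and an illustrative helper name, hotelling_t2_pvalue:

```python
from scipy.stats import f

def hotelling_t2_pvalue(t2, n, p):
    # p-value for an observed t^2 with n observations in p dimensions,
    # using (n - p) / (p * (n - 1)) * t^2 ~ F(p, n - p)
    f_stat = (n - p) / (p * (n - 1)) * t2
    return f.sf(f_stat, dfn=p, dfd=n - p)  # upper-tail probability of the F-distribution

# Example: n = 50, p = 3, observed t^2 = 12.5
print(hotelling_t2_pvalue(12.5, n=50, p=3))
```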