In mathematics, Tucker decomposition decomposes a tensor into a set of matrices and one small core tensor. It is named after Ledyard R. Tucker, although it goes back to Hitchcock in 1927.
Initially described as a three-mode extension of factor analysis and principal component analysis, it may actually be generalized to higher-mode analysis, which is also called higher-order singular value decomposition (HOSVD).
It may be regarded as a more flexible PARAFAC (parallel factor analysis) model. In PARAFAC the core tensor is restricted to be "diagonal".
In practice, Tucker decomposition is used as a modelling tool. For instance, it is used to model three-way (or higher-way) data by means of relatively small numbers of components for each of the three or more modes, and the components are linked to each other by a three- (or higher-) way core array. The model parameters are estimated in such a way that, given fixed numbers of components, the modelled data optimally resemble the actual data in the least-squares sense. The model gives a summary of the information in the data, in the same way as principal component analysis does for two-way data.
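As a concrete illustration of this fitting procedure, here is a minimal sketch assuming the open-source tensorly package is installed; its tucker routine estimates the component matrices and core so as to minimize the least-squares reconstruction error. The data shape and the numbers of components per mode are arbitrary choices for the example, not part of the method.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Synthetic three-way data: 10 subjects x 8 variables x 5 occasions (arbitrary)
X = tl.tensor(np.random.default_rng(0).standard_normal((10, 8, 5)))

# Fit a Tucker model with (3, 2, 2) components per mode (an illustrative choice)
core, factors = tucker(X, rank=[3, 2, 2])

# Reconstruct the modelled data and check the least-squares fit
X_hat = tl.tucker_to_tensor((core, factors))
print("relative error:", float(tl.norm(X - X_hat) / tl.norm(X)))
```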
For a 3rd-order tensor $T \in F^{n_1 \times n_2 \times n_3}$, where $F$ is either $\mathbb{R}$ or $\mathbb{C}$, Tucker decomposition can be denoted as follows:

$$T = \mathcal{T} \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)},$$

where $\mathcal{T} \in F^{d_1 \times d_2 \times d_3}$ is the core tensor, a 3rd-order tensor that contains the 1-mode, 2-mode and 3-mode singular values of $T$, which are defined as the Frobenius norms of the 1-mode, 2-mode and 3-mode slices of the tensor $\mathcal{T}$, respectively. $U^{(1)} \in F^{n_1 \times d_1}$, $U^{(2)} \in F^{n_2 \times d_2}$ and $U^{(3)} \in F^{n_3 \times d_3}$ are unitary matrices. The $j$-mode product ($j = 1, 2, 3$) of $\mathcal{T}$ by $U^{(j)}$ is denoted as $\mathcal{T} \times_j U^{(j)}$, with entries given by

$$\left(\mathcal{T} \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)}\right)(i_1, i_2, i_3) = \sum_{l_1=1}^{d_1} \sum_{l_2=1}^{d_2} \sum_{l_3=1}^{d_3} \mathcal{T}(l_1, l_2, l_3)\, U^{(1)}(i_1, l_1)\, U^{(2)}(i_2, l_2)\, U^{(3)}(i_3, l_3).$$
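A minimal NumPy sketch may help make the index pattern of the mode products concrete. The function names here are illustrative, not a standard API, and modes are 0-based as is usual in NumPy:

```python
import numpy as np

def mode_product(T, U, mode):
    """Mode product of a 3rd-order tensor T with a matrix U along `mode`
    (0-based here). U has shape (n, d) with d = T.shape[mode]; the result
    has n along that mode and leaves the other two modes unchanged."""
    eqs = {0: 'ljk,il->ijk', 1: 'ilk,jl->ijk', 2: 'ijl,kl->ijk'}
    return np.einsum(eqs[mode], T, U)

def tucker_compose(G, U1, U2, U3):
    """T = G x_1 U1 x_2 U2 x_3 U3 for a 3rd-order core G."""
    return mode_product(mode_product(mode_product(G, U1, 0), U2, 1), U3, 2)
```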
Taking $d_i = n_i$ for all $i$ is always sufficient to represent $T$ exactly, but often $T$ can be compressed or efficiently approximated by choosing $d_i < n_i$. A common choice is $d_1 = d_2 = d_3 = \min(n_1, n_2, n_3)$, which can be effective when the difference in dimension sizes is large.
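One standard way to obtain such a compressed representation is the truncated HOSVD: keep only the leading left singular vectors of each mode unfolding, then project the tensor onto those subspaces to get the core. A minimal sketch in NumPy, assuming a real-valued tensor; the shapes and ranks are arbitrary choices for the example:

```python
import numpy as np

def unfold(T, mode):
    """Matricize a 3rd-order tensor along `mode` (0-based)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Tucker approximation via truncated HOSVD: keep the leading ranks[m]
    left singular vectors of each mode unfolding, then project for the core."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    G = np.einsum('abc,ai,bj,ck->ijk', T, U[0], U[1], U[2])  # core tensor
    return G, U

T = np.random.default_rng(1).standard_normal((6, 5, 4))
G, U = truncated_hosvd(T, (3, 3, 3))
T_hat = np.einsum('ijk,ai,bj,ck->abc', G, U[0], U[1], U[2])
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```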
There are two special cases of Tucker decomposition:
Tucker1: if $U^{(2)}$ and $U^{(3)}$ are identity, then $T = \mathcal{T} \times_1 U^{(1)}$
Tucker2: if $U^{(3)}$ is identity, then $T = \mathcal{T} \times_1 U^{(1)} \times_2 U^{(2)}$.
RESCAL decomposition can be seen as a special case of Tucker where $U^{(3)}$ is identity and $U^{(1)}$ is equal to $U^{(2)}$.
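This correspondence is easy to verify numerically. In the sketch below (shapes arbitrary, real-valued for simplicity), building a tensor with a shared factor matrix on the first two modes and identity on the third reproduces the slice-wise RESCAL form $X_k = A \mathcal{T}_k A^{\mathsf{T}}$:

```python
import numpy as np

# RESCAL as a Tucker special case: U1 = U2 = A, U3 = identity, so each
# frontal slice of the data tensor factors as X[:, :, k] = A @ R_k @ A.T
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2))         # shared factor matrix (U1 = U2)
core = rng.standard_normal((2, 2, 3))   # frontal slices R_k of the core
X = np.einsum('pqk,ip,jq->ijk', core, A, A)
assert np.allclose(X[:, :, 0], A @ core[:, :, 0] @ A.T)
```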