In multilinear algebra, the higher-order singular value decomposition (HOSVD) of a tensor is a specific orthogonal Tucker decomposition. It may be regarded as one type of generalization of the matrix singular value decomposition. It has applications in computer vision, computer graphics, machine learning, scientific computing, and signal processing. Some aspects can be traced as far back as F. L. Hitchcock in 1928, but it was L. R. Tucker who developed the general Tucker decomposition for third-order tensors in the 1960s. It was further advocated by L. De Lathauwer et al., whose multilinear SVD work employs the power method, and by Vasilescu and Terzopoulos, who developed the M-mode SVD, a parallel algorithm that employs the matrix SVD.
The term higher-order singular value decomposition (HOSVD) was coined by De Lathauwer, but the algorithm commonly referred to in the literature as the HOSVD, and attributed to either Tucker or De Lathauwer, was developed by Vasilescu and Terzopoulos. Robust and L1-norm-based variants of HOSVD have also been proposed.
For the purpose of this article, the abstract tensor $\mathcal{A}$ is assumed to be given in coordinates with respect to some basis as an M-way array, also denoted by $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}$, where M is the number of modes and the order of the tensor. $\mathbb{C}$ denotes the complex numbers, which include both the real numbers and the purely imaginary numbers.
Let $U_m \in \mathbb{C}^{I_m \times I_m}$ be a unitary matrix containing a basis of the left singular vectors of the standard mode-m flattening $\mathcal{A}_{[m]}$ of $\mathcal{A}$, such that the jth column $\mathbf{u}_j$ of $U_m$ corresponds to the jth largest singular value of $\mathcal{A}_{[m]}$. Observe that the mode/factor matrix $U_m$ does not depend on the specific definition of the mode-m flattening. By the properties of the multilinear multiplication, we have
$$
\mathcal{A} = \mathcal{A} \times (I, I, \ldots, I) = \mathcal{A} \times (U_1 U_1^H, U_2 U_2^H, \ldots, U_M U_M^H) = \left( \mathcal{A} \times (U_1^H, U_2^H, \ldots, U_M^H) \right) \times (U_1, U_2, \ldots, U_M),
$$
where $\cdot^H$ denotes the conjugate transpose. The second equality holds because the $U_m$'s are unitary matrices. Define now the core tensor
$$
\mathcal{S} := \mathcal{A} \times (U_1^H, U_2^H, \ldots, U_M^H).
$$
Then, the HOSVD of $\mathcal{A}$ is the decomposition
$$
\mathcal{A} = \mathcal{S} \times (U_1, U_2, \ldots, U_M).
$$
The above construction shows that every tensor has a HOSVD.
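The construction above translates directly into a short algorithm: compute the SVD of each mode-m flattening to obtain the factor matrices, then contract the tensor with their conjugate transposes to form the core. The following NumPy sketch illustrates this; the function name `hosvd` and the particular unfolding convention are illustrative assumptions, not prescribed by the text (as noted above, the factor matrices do not depend on the flattening convention chosen).

```python
import numpy as np

def hosvd(A):
    """Higher-order SVD: returns a core tensor S and factor matrices U_1..U_M
    such that A = S x_1 U_1 x_2 U_2 ... x_M U_M (up to floating-point error)."""
    U = []
    S = A
    for m in range(A.ndim):
        # Mode-m flattening: bring mode m to the front, reshape to a matrix.
        A_m = np.moveaxis(A, m, 0).reshape(A.shape[m], -1)
        # Left singular vectors, ordered by decreasing singular value.
        U_m, _, _ = np.linalg.svd(A_m, full_matrices=True)
        U.append(U_m)
        # Multiply mode m of the core by U_m^H (conjugate transpose).
        S = np.moveaxis(np.tensordot(U_m.conj().T, S, axes=(1, m)), 0, m)
    return S, U

# Usage: verify that the decomposition reconstructs the tensor.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
S, U = hosvd(A)
A_rec = S
for m, U_m in enumerate(U):
    A_rec = np.moveaxis(np.tensordot(U_m, A_rec, axes=(1, m)), 0, m)
assert np.allclose(A, A_rec)
```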
As in the case of the compact singular value decomposition of a matrix, it is also possible to consider a compact HOSVD, in which each factor matrix $U_m$ retains only the columns corresponding to the nonzero singular values of the mode-m flattening; this variant is very useful in applications.
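Continuing the sketch above, a compact HOSVD can be obtained by discarding the singular vectors whose singular values are numerically zero; the tolerance parameter `tol` below is an assumption for illustration.

```python
def compact_hosvd(A, tol=1e-12):
    """Compact HOSVD: each U_m has shape (I_m, r_m), where r_m is the number
    of singular values of the mode-m flattening exceeding tol."""
    U = []
    S = A
    for m in range(A.ndim):
        A_m = np.moveaxis(A, m, 0).reshape(A.shape[m], -1)
        U_m, sigma, _ = np.linalg.svd(A_m, full_matrices=False)
        r_m = max(1, int(np.sum(sigma > tol)))  # numerical mode-m rank
        U.append(U_m[:, :r_m])
        # The core shrinks to size r_1 x ... x r_M as the loop proceeds.
        S = np.moveaxis(np.tensordot(U[-1].conj().T, S, axes=(1, m)), 0, m)
    return S, U
```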
Multilinear subspace learning is an approach for disentangling the causal factors of data formation and performing dimensionality reduction. The dimensionality reduction can be performed on a data tensor that contains a collection of observations that have been vectorized, or observations that are treated as matrices and concatenated into a data tensor. Examples of data tensors whose observations are vectorized, or whose observations are matrices concatenated into a data tensor, include images (2D/3D), video sequences (3D/4D), and hyperspectral cubes (3D/4D).
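As a small illustration of the two ways of collecting observations mentioned above, the following sketch contrasts them in NumPy; the shapes and variable names are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 grayscale 32x32 images as matrix-valued observations.
images = [rng.standard_normal((32, 32)) for _ in range(100)]

# (a) Vectorized observations: each image flattened to a length-1024
#     vector, stacked into a 100 x 1024 matrix.
X_vectorized = np.stack([im.ravel() for im in images])

# (b) Matrix observations concatenated along a new mode, giving a
#     100 x 32 x 32 third-order data tensor.
X_tensor = np.stack(images)
```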