Multilinear subspace learning is an approach for disentangling the causal factors of data formation and performing dimensionality reduction.
Dimensionality reduction can be performed on a data tensor that contains a collection of observations that have been vectorized, or observations that are treated as matrices and concatenated into a data tensor. Examples of data tensors whose observations are vectorized, or whose observations are matrices concatenated into a data tensor, include images (2D/3D), video sequences (3D/4D), and hyperspectral cubes (3D/4D).
The mapping from a high-dimensional vector space to a set of lower-dimensional vector spaces is a multilinear projection. When observations are retained in the same organizational structure as matrices or higher-order tensors, their representations are computed by performing linear projections into the column space, row space, and fiber space.
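As an illustration, here is a minimal NumPy sketch of such a multilinear projection. The helper names `unfold` and `mode_dot` are illustrative, not taken from any particular library; the sketch applies one projection matrix per mode of a third-order data tensor, reducing its column, row, and fiber spaces:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product T x_n M: apply the matrix M to the mode-n fibers of T."""
    rest = [s for i, s in enumerate(T.shape) if i != mode]
    out = M @ unfold(T, mode)                      # shape (M.shape[0], prod(rest))
    return np.moveaxis(out.reshape([M.shape[0]] + rest), 0, mode)

# A small third-order data tensor projected into lower-dimensional
# column, row, and fiber spaces by one matrix per mode.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6, 4))
U1 = rng.standard_normal((3, 8))   # column-space projection
U2 = rng.standard_normal((2, 6))   # row-space projection
U3 = rng.standard_normal((2, 4))   # fiber-space projection
Y = mode_dot(mode_dot(mode_dot(X, U1, 0), U2, 1), U3, 2)
print(Y.shape)  # (3, 2, 2)
```

Chaining one mode-n product per mode like this is what distinguishes a multilinear projection from a single linear projection of the vectorized observation.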
Multilinear subspace learning algorithms are higher-order generalizations of linear subspace learning methods such as principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis (LDA) and canonical correlation analysis (CCA).
Multilinear methods may be causal in nature and perform causal inference, or they may be simple regression methods from which no causal conclusions are drawn.
Linear subspace learning algorithms are traditional dimensionality reduction techniques that are well suited for datasets that result from varying a single causal factor. Unfortunately, they often become inadequate when dealing with datasets that result from multiple causal factors.
Multilinear subspace learning can be applied to observations whose measurements were vectorized and organized into a data tensor for causally aware dimensionality reduction. These methods may also be employed to reduce horizontal and vertical redundancies, irrespective of the causal factors, when the observations are treated as a "matrix" (i.e., a collection of independent column/row observations) and concatenated into a tensor.
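As a rough sketch of this second use case, the following NumPy function (a simplified, 2D-PCA/GLRAM-style construction; the name `two_mode_pca` is mine and it is not a reference implementation of any specific published algorithm) projects a stack of matrix observations onto the leading column-space and row-space eigenvectors, reducing vertical and horizontal redundancy at once:

```python
import numpy as np

def two_mode_pca(X, p, q):
    """Reduce a stack of matrix observations X of shape (n, h, w) to (n, p, q)
    by projecting onto leading column-space and row-space eigenvectors."""
    Xc = X - X.mean(axis=0)                    # center the observations
    C_col = np.einsum('nij,nkj->ik', Xc, Xc)   # sum_i X_i X_i^T, shape (h, h)
    C_row = np.einsum('nij,nik->jk', Xc, Xc)   # sum_i X_i^T X_i, shape (w, w)
    U = np.linalg.eigh(C_col)[1][:, -p:]       # top-p column-space basis (h, p)
    V = np.linalg.eigh(C_row)[1][:, -q:]       # top-q row-space basis (w, q)
    Y = np.einsum('hp,nhw,wq->npq', U, Xc, V)  # Y_i = U^T X_i V
    return Y, U, V

# 100 "images" of size 32 x 24, compressed to 8 x 6 each.
imgs = np.random.default_rng(1).standard_normal((100, 32, 24))
Y, U, V = two_mode_pca(imgs, p=8, q=6)
print(Y.shape)  # (100, 8, 6)
```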
In mathematics, Tucker decomposition decomposes a tensor into a set of matrices and one small core tensor. It is named after Ledyard R. Tucker, although it goes back to Hitchcock in 1927. Initially described as a three-mode extension of factor analysis and principal component analysis, it may actually be generalized to higher-mode analysis, which is also called higher-order singular value decomposition (HOSVD). It may be regarded as a more flexible PARAFAC (parallel factor analysis) model.
In multilinear algebra, a tensor decomposition is any scheme for expressing a "data tensor" (M-way array) as a sequence of elementary operations acting on other, often simpler tensors. Many tensor decompositions generalize some matrix decompositions. Tensors are generalizations of matrices to higher dimensions (or rather to higher orders, i.e. the higher number of dimensions) and can consequently be treated as multidimensional fields.
In multilinear algebra, the higher-order singular value decomposition (HOSVD) of a tensor is a specific orthogonal Tucker decomposition. It may be regarded as one type of generalization of the matrix singular value decomposition. It has applications in computer vision, computer graphics, machine learning, scientific computing, and signal processing. Some aspects can be traced as far back as F. L. Hitchcock in 1928, but it was L. R. Tucker who developed the general Tucker decomposition for third-order tensors in the 1960s, further advocated by L. De Lathauwer et al.
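A compact NumPy sketch of the truncated HOSVD described above (helper names are illustrative; real projects would typically reach for a library such as TensorLy): the factor matrices are the leading left singular vectors of each mode unfolding, and the core tensor is obtained by projecting the data tensor onto them, which instantiates an orthogonal Tucker decomposition.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(X, ranks):
    """Truncated HOSVD: X ~ S x_1 U1 x_2 U2 x_3 U3 (an orthogonal Tucker model)."""
    # Factor matrices: leading left singular vectors of each mode unfolding.
    Us = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
          for n, r in enumerate(ranks)]
    # Core tensor: project X onto the column space of each factor.
    S = X
    for n, U in enumerate(Us):
        S = np.moveaxis(np.tensordot(U.T, S, axes=(1, n)), 0, n)
    return S, Us

def reconstruct(S, Us):
    """Multiply the core tensor by each factor matrix along its mode."""
    X = S
    for n, U in enumerate(Us):
        X = np.moveaxis(np.tensordot(U, X, axes=(1, n)), 0, n)
    return X

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 9, 8))
S, Us = hosvd(X, ranks=(10, 9, 8))           # full ranks: exact decomposition
print(np.allclose(reconstruct(S, Us), X))    # True
S2, Us2 = hosvd(X, ranks=(4, 3, 2))          # truncated: low multilinear rank
```

With full ranks the decomposition is exact; truncating the ranks yields a low multilinear-rank approximation, which is the dimensionality-reduction use discussed earlier.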
The objective of the course is to introduce the basic notions of linear algebra and its applications.
This course is an introduction to the classical differential geometry of curves and surfaces, mainly in the plane and in Euclidean space.
In this course, students learn to design and master algorithms and core concepts related to inference and learning from data, as well as the foundations of adaptation and learning theories, with applications.
We present a two-staged statistical and geospatial analysis exploring the discrepancies of household electricity tariffs across 1,913 Swiss municipalities. First, we perform a multilinear regression analysis, considering structural, sociodemographic data a ...
Speech recognition-based applications upon the advancements in artificial intelligence play an essential role to transform most aspects of modern life. However, speech recognition in real-life conditions (e.g., in the presence of overlapping speech, varyin ...
Reduced-order models are indispensable for multi-query or real-time problems. However, there are still many challenges to constructing efficient ROMs for time-dependent parametrized problems. Using a linear reduced space is inefficient for time-dependent n ...