In multilinear algebra, the tensor rank decomposition, or rank decomposition of a tensor, is the decomposition of a tensor as a sum of a minimum number of rank-1 tensors. Computing this decomposition is an open problem.
Canonical polyadic decomposition (CPD) is a variant of the rank decomposition which computes the best-fitting $K$ rank-1 terms for a user-specified $K$. The CP decomposition has found applications in linguistics and chemometrics. The CP rank was introduced by Frank Lauren Hitchcock in 1927 and later rediscovered several times, notably in psychometrics.
The CP decomposition is also referred to as CANDECOMP, PARAFAC, or CANDECOMP/PARAFAC (CP). The related PARAFAC2 rank decomposition has yet to be fully explored.
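In practice, a CP decomposition with a user-specified number of terms is fitted numerically, typically by alternating least squares. The following is a minimal sketch assuming the third-party TensorLy library, whose `parafac` and `cp_to_tensor` routines fit and reconstruct a CP model; exact names and defaults may differ across versions.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# A small synthetic 3-way data tensor.
X = tl.tensor(np.random.rand(4, 5, 6))

# Fit a CP model with a user-specified number of rank-1 terms (K = 3 here).
cp = parafac(X, rank=3)

# Rebuild the approximation from the fitted weights and factor matrices.
X_hat = tl.cp_to_tensor(cp)
print(tl.norm(X - X_hat) / tl.norm(X))  # relative fit error
```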
Another popular generalization of the matrix SVD, known as the higher-order singular value decomposition (HOSVD), computes orthonormal mode matrices and has found applications in econometrics, signal processing, computer vision, computer graphics, and psychometrics.
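The HOSVD reduces to ordinary matrix SVDs of the tensor's mode unfoldings. Below is a minimal NumPy sketch; the function names `unfold` and `hosvd` are illustrative, not from any particular library.

```python
import numpy as np

def unfold(tensor, mode):
    """Matricize: move `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor):
    """Higher-order SVD: one orthonormal factor matrix per mode, plus a core."""
    factors = []
    for mode in range(tensor.ndim):
        # Left singular vectors of the mode-m unfolding give the
        # orthonormal mode-m factor matrix.
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U)
    # Core tensor: project each mode onto its factor matrix.
    core = tensor
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

# Usage: with no truncation, the reconstruction is exact.
A = np.random.rand(4, 5, 6)
core, factors = hosvd(A)
recon = core
for mode, U in enumerate(factors):
    recon = np.moveaxis(np.tensordot(U, recon, axes=(1, mode)), 0, mode)
assert np.allclose(A, recon)
```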
A scalar variable is denoted by a lower case italic letter, $a$, and an upper bound scalar is denoted by an upper case italic letter, $A$.
Indices are denoted by a combination of lower case and upper case italic letters, $1 \le i \le I$. Multiple indices that one might encounter when referring to the multiple modes of a tensor are conveniently denoted by $1 \le i_m \le I_m$, where $1 \le m \le M$.
A vector is denoted by a lower case bold letter, $\mathbf{a}$, and a matrix is denoted by a bold upper case letter, $\mathbf{A}$.
A higher-order tensor is denoted by a calligraphic letter, $\mathcal{A}$. An element of an $M$-order tensor $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}$ is denoted by $a_{i_1 i_2 \ldots i_M}$ or $\mathcal{A}_{i_1 i_2 \ldots i_M}$.
A data tensor $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}$ is a collection of multivariate observations organized into an $M$-way array, where $M = C + 1$. Every tensor may be represented, with a suitably large $R$, as a linear combination of $R$ rank-1 tensors:
$$\mathcal{A} = \sum_{r=1}^{R} \lambda_r \, \mathbf{a}_{1,r} \otimes \mathbf{a}_{2,r} \otimes \cdots \otimes \mathbf{a}_{M,r},$$
where $\lambda_r \in \mathbb{C}$ and $\mathbf{a}_{m,r} \in \mathbb{C}^{I_m}$ for $1 \le m \le M$. When the number of terms $R$ is minimal in the above expression, then $R$ is called the rank of the tensor, and the decomposition is often referred to as a (tensor) rank decomposition, minimal CP decomposition, or Canonical Polyadic Decomposition (CPD). If the number of terms is not minimal, then the above decomposition is often referred to as CANDECOMP/PARAFAC or polyadic decomposition.
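For concreteness, the rank-1 sum above is an outer-product contraction that can be written directly. The sketch below builds a small 3-way tensor from $R$ hypothetical weights $\lambda_r$ and factor vectors using NumPy's `einsum`; all names and sizes are illustrative.

```python
import numpy as np

# Build A = sum_r lambda_r * a_{1,r} (x) a_{2,r} (x) a_{3,r} for a 3-way tensor.
I1, I2, I3, R = 4, 5, 6, 3
rng = np.random.default_rng(0)
lam = rng.standard_normal(R)          # weights lambda_r
A1 = rng.standard_normal((I1, R))     # column r is the factor vector a_{1,r}
A2 = rng.standard_normal((I2, R))
A3 = rng.standard_normal((I3, R))

# einsum sums the rank-1 outer products over the shared index r.
A = np.einsum('r,ir,jr,kr->ijk', lam, A1, A2, A3)
assert A.shape == (I1, I2, I3)
```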