# Low-rank approximation

## Summary

In mathematics, low-rank approximation is a minimization problem, in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to a constraint that the approximating matrix has reduced rank. The problem is used for mathematical modeling and data compression. The rank constraint is related to a constraint on the complexity of a model that fits the data. In applications, often there are other constraints on the approximating matrix apart from the rank constraint, e.g., non-negativity and Hankel structure.
Low-rank approximation is closely related to numerous other techniques, including principal component analysis, factor analysis, total least squares, latent semantic analysis, orthogonal regression, and dynamic mode decomposition.
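The connection to principal component analysis can be made concrete: PCA is a low-rank approximation of the centered data matrix. A minimal NumPy sketch (synthetic data, all variable names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 100 samples lying near a 2-D subspace of R^5.
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(100, 5))

# PCA = low-rank approximation of the centered data matrix via the SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2
scores = U[:, :k] * s[:k]                 # principal component scores
X_approx = scores @ Vt[:k] + X.mean(axis=0)

# The rank-2 reconstruction recovers the data almost exactly,
# since the data are nearly rank 2 by construction.
rel_err = np.linalg.norm(X - X_approx) / np.linalg.norm(X)
```

Truncating at `k` components is exactly a rank-`k` approximation of the centered matrix, which is why PCA appears throughout the list of related techniques.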
Given

- a structure specification $\mathcal{S} : \mathbb{R}^{n_p} \to \mathbb{R}^{m \times n}$,
- a vector of structure parameters $p \in \mathbb{R}^{n_p}$,
- a norm $\|\cdot\|$, and
- a desired rank $r$,

the structured low-rank approximation problem is

$$\operatorname*{minimize}_{\hat p} \ \|p - \hat p\| \quad \text{subject to} \quad \operatorname{rank}\bigl(\mathcal{S}(\hat p)\bigr) \le r.$$
This setting covers a range of applications:

- Linear system identification, in which case the approximating matrix is Hankel structured.
- Machine learning, in which case the approximating matrix is nonlinearly structured.
- Recommender systems, in which case the data matrix has missing values and the approximation is categorical.
- Distance matrix completion, in which case there is a positive definiteness constraint.
- Natural language processing, in which case the approximation is nonnegative.
- Computer algebra, in which case the approximation is Sylvester structured.
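As an illustration of the recommender-system case, here is a minimal sketch (not from the source, and ignoring the categorical aspect) of low-rank completion of a synthetic ratings matrix with missing entries, using alternating least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical ratings matrix with true rank-2 structure; only ~40% of
# the entries are observed.
W_true = rng.uniform(1, 3, size=(20, 2)) @ rng.uniform(1, 3, size=(2, 15))
mask = rng.random(size=W_true.shape) < 0.4
R = np.where(mask, W_true, 0.0)

# Alternating least squares: fit R ≈ A @ B on the observed entries only.
k = 2
A = rng.normal(size=(20, k))
B = rng.normal(size=(k, 15))
lam = 1e-3  # small ridge term keeps the normal equations well conditioned
for _ in range(50):
    for i in range(20):        # update each row (user) factor
        obs = mask[i]
        Bo = B[:, obs]
        A[i] = np.linalg.solve(Bo @ Bo.T + lam * np.eye(k), Bo @ R[i, obs])
    for j in range(15):        # update each column (item) factor
        obs = mask[:, j]
        Ao = A[obs]
        B[:, j] = np.linalg.solve(Ao.T @ Ao + lam * np.eye(k), Ao.T @ R[obs, j])

# Relative error on the *unobserved* entries measures completion quality.
pred = A @ B
err = np.linalg.norm((pred - W_true)[~mask]) / np.linalg.norm(W_true[~mask])
```

Because the missing entries make the fit criterion a weighted norm, the plain SVD solution below no longer applies, which is why iterative schemes such as alternating least squares are used in this setting.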
The unstructured problem with fit measured by the Frobenius norm, i.e.,

$$\operatorname*{minimize}_{\hat D} \ \|D - \hat D\|_F \quad \text{subject to} \quad \operatorname{rank}(\hat D) \le r,$$

has an analytic solution in terms of the singular value decomposition of the data matrix. The result is referred to as the matrix approximation lemma or the Eckart–Young–Mirsky theorem. The problem was originally solved by Erhard Schmidt in the infinite-dimensional context of integral operators (although his methods generalize readily to arbitrary compact operators on Hilbert spaces) and later rediscovered by C. Eckart and G. Young. L. Mirsky generalized the result to arbitrary unitarily invariant norms.
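The Eckart–Young–Mirsky construction is easy to check numerically: truncating the SVD after $r$ terms gives a rank-$r$ matrix whose Frobenius error equals the tail of the singular values. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(size=(8, 6))

# Best rank-r approximation in the Frobenius norm: truncate the SVD
# (Eckart–Young–Mirsky theorem).
r = 3
U, s, Vt = np.linalg.svd(D, full_matrices=False)
D_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

assert np.linalg.matrix_rank(D_hat) == r
# The optimal error equals the tail of the singular values:
# ||D - D_hat||_F^2 = s_{r+1}^2 + ... + s_n^2.
err = np.linalg.norm(D - D_hat)
tail = np.sqrt(np.sum(s[r:] ** 2))
print(np.isclose(err, tail))  # True
```

The same truncation is optimal in the spectral norm as well, where the error is the single value $s_{r+1}$; this is the "arbitrary unitarily invariant norms" part of Mirsky's generalization.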



## Singular value decomposition

In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any matrix, and it is related to the polar decomposition. Specifically, the singular value decomposition of an $m \times n$ complex matrix $M$ is a factorization of the form $M = U \Sigma V^*$, where $U$ is an $m \times m$ complex unitary matrix, $\Sigma$ is an $m \times n$ rectangular diagonal matrix with non-negative real numbers on the diagonal, $V$ is an $n \times n$ complex unitary matrix, and $V^*$ is the conjugate transpose of $V$.
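These defining properties (unitary factors, non-negative real singular values, exact reconstruction) can be verified directly; a minimal NumPy sketch on a small complex matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
# A 4x3 complex matrix M = U @ Sigma @ V^*, as in the definition above.
M = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
U, s, Vh = np.linalg.svd(M)  # full_matrices=True: U is 4x4, Vh is 3x3

# Rebuild the rectangular diagonal factor Sigma.
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)

# U and V are unitary; the singular values are non-negative reals.
assert np.allclose(U.conj().T @ U, np.eye(4))
assert np.allclose(Vh @ Vh.conj().T, np.eye(3))
assert np.all(s >= 0)
assert np.allclose(U @ Sigma @ Vh, M)
```

Note that `np.linalg.svd` returns $V^*$ (here `Vh`) rather than $V$, and returns the singular values as a 1-D array in non-increasing order rather than as the matrix $\Sigma$.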

## PHYS-467: Machine learning for physicists

Machine learning and data analysis are becoming increasingly central in the sciences, including physics. In this course, fundamental principles and methods of machine learning will be introduced and practiced.

Related lectures cover these topics:

- Low-rank approximations, spectral theorems, and orthogonality in matrices.
- Singular value decomposition and principal component analysis for dimensionality reduction, with applications in visualization and efficiency.
- Singular value decomposition theory, properties, uniqueness, matrix approximation, and dimensionality-reduction applications.

Related publications include:

- In this thesis we will present and analyze randomized algorithms for numerical linear algebra problems. An important theme in this thesis is randomized low-rank approximation. In particular, we will study randomized low-rank approximation of matrix functio ...
- Mathieu Salzmann, Zheng Dang, Yu Guo: In this work, we tackle the task of estimating the 6D pose of an object from point cloud data. While recent learning-based approaches have shown remarkable success on synthetic datasets, we have observed them to fail in the presence of real-world data. We ...
- The quantification of uncertainties can be particularly challenging for problems requiring long-time integration, as the structure of the random solution might change considerably over time. In this respect, dynamical low-rank approximation (DLRA) is very a ...