
# Dynamical Approximation by Hierarchical Tucker and Tensor-Train Tensors

Journal paper

Abstract

We extend results on the dynamical low-rank approximation for the treatment of time-dependent matrices and tensors (Koch and Lubich; see [SIAM J. Matrix Anal. Appl., 29 (2007), pp. 434-454], [SIAM J. Matrix Anal. Appl., 31 (2010), pp. 2360-2375]) to the recently proposed hierarchical Tucker (HT) tensor format (Hackbusch and Kühn; see [J. Fourier Anal. Appl., 15 (2009), pp. 706-722]) and the tensor train (TT) format (Oseledets; see [SIAM J. Sci. Comput., 33 (2011), pp. 2295-2317]), which are closely related to tensor decomposition methods used in quantum physics and chemistry. In this dynamical approximation approach, the time derivative of the tensor to be approximated is projected onto the time-dependent tangent space of the approximation manifold along the solution trajectory. This approach can be used to approximate the solutions to tensor differential equations in the HT or TT format and to compute updates in optimization algorithms within these reduced tensor formats. By deriving and analyzing the tangent space projector for the manifold of HT/TT tensors of fixed rank, we obtain curvature estimates, which yield quasi-best approximation properties for the dynamical approximation, showing that the prospects and limitations of the ansatz are similar to those of the dynamical low-rank approximation for matrices. Our results are exemplified by numerical experiments.
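In the matrix case that this work generalizes, the tangent-space projector at a rank-r point Y = U S Vᵀ (with U and V having orthonormal columns) has the well-known closed form P(Z) = U Uᵀ Z + Z V Vᵀ − U Uᵀ Z V Vᵀ. A minimal NumPy sketch of this projector (the function name and example data are illustrative, not taken from the paper):

```python
import numpy as np

def tangent_project(U, V, Z):
    """Project Z onto the tangent space of the fixed-rank matrix
    manifold at Y = U S V^T, where U and V have orthonormal columns:
    P(Z) = U U^T Z + Z V V^T - U U^T Z V V^T."""
    UtZ = U.T @ Z
    return U @ UtZ + (Z @ V) @ V.T - U @ (UtZ @ V) @ V.T

# small example: a rank-2 base point in R^{6x5}
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((6, 2)))
V, _ = np.linalg.qr(rng.standard_normal((5, 2)))
Z = rng.standard_normal((6, 5))
PZ = tangent_project(U, V, Z)
```

Since P is an orthogonal projector, applying it twice changes nothing, and any matrix of the form U W (already tangent) is left fixed.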

Official source

Related publications (13)

The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a new algorithm for matrix completion that minimizes the least-square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as a retraction and how vector transport lets us obtain the conjugate search directions. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorably with the state of the art, while outperforming most existing solvers.
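The core iteration this abstract describes — Euclidean gradient on the sampling set, projection onto the tangent space, then a metric-projection (truncated-SVD) retraction — can be sketched as a single plain Riemannian gradient step. This is a simplified illustration under those assumptions, not the paper's full nonlinear CG scheme with vector transport; all names are hypothetical:

```python
import numpy as np

def svd_retract(Y, r):
    """Metric-projection retraction: truncate Y to rank r via SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def completion_step(U, S, Vt, Omega, samples, step):
    """One plain Riemannian gradient step for matrix completion:
    residual on the sampling set -> tangent-space projection
    -> SVD retraction back onto the rank-r manifold."""
    Y = U @ S @ Vt
    r = S.shape[0]
    G = np.zeros_like(Y)              # Euclidean gradient of
    G[Omega] = Y[Omega] - samples     # 0.5 * ||P_Omega(Y) - samples||^2
    V = Vt.T
    UtG = U.T @ G
    PG = U @ UtG + (G @ V) @ Vt - U @ (UtG @ V) @ Vt  # Riemannian gradient
    return svd_retract(Y - step * PG, r)

# toy run: recover a rank-1 matrix from roughly 60% of its entries
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(8), rng.standard_normal(7))
Omega = np.nonzero(rng.random(M.shape) < 0.6)
samples = M[Omega]
Y0 = svd_retract(M + 0.1 * rng.standard_normal(M.shape), 1)
U0, s0, Vt0 = np.linalg.svd(Y0, full_matrices=False)
Y1 = completion_step(U0[:, :1], np.diag(s0[:1]), Vt0[:1, :],
                     Omega, samples, step=0.2)
```

For a small enough step, one such step decreases the least-squares misfit on the sampling set while keeping the iterate exactly rank-r, which is the point of retraction-based optimization.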

Related concepts (20)

In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, an n-dimensional manifold, or n-manifold for short, is a topological space with the property that each point has a neighborhood that is homeomorphic to an open subset of n-dimensional Euclidean space.

In mathematics, low-rank approximation is a minimization problem in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to the constraint that the approximating matrix has reduced rank.

Michael Maximilian Steinlechner

Tensor completion aims to reconstruct a high-dimensional data set where the vast majority of entries is missing. The assumption of low-rank structure in the underlying original data allows us to cast the completion problem into an optimization problem restricted to the manifold of fixed-rank tensors. Elements of this smooth embedded submanifold can be efficiently represented in the tensor train or matrix product states format with storage complexity scaling linearly with the number of dimensions. We present a nonlinear conjugate gradient scheme within the framework of Riemannian optimization which exploits this favorable scaling. Numerical experiments and comparison to existing methods show the effectiveness of our approach for the approximation of multivariate functions. Finally, we show that our algorithm can obtain competitive reconstructions from uniform random sampling of few entries compared to adaptive sampling techniques such as cross-approximation.
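The tensor train format mentioned above represents a d-dimensional array as a chain of third-order cores, so storage grows linearly in d (roughly O(d·n·r²) for mode size n and TT rank r) instead of exponentially as O(nᵈ). A minimal sketch of the standard TT-SVD construction by successive truncated SVDs (Oseledets, 2011); the function names are illustrative:

```python
import numpy as np

def tt_svd(A, max_rank):
    """Decompose array A into TT cores G_k of shape (r_{k-1}, n_k, r_k)
    by sweeping over the modes with truncated SVDs (TT-SVD)."""
    dims = A.shape
    cores, r_prev = [], 1
    C = A.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(s))                      # rank truncation
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = np.diag(s[:r]) @ Vt[:r, :]                 # carry the remainder
        r_prev = r
        C = C.reshape(r_prev * dims[k + 1], -1)
    cores.append(C.reshape(r_prev, dims[-1], 1))       # last core
    return cores

def tt_full(cores):
    """Contract the TT cores back into the full array."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=(out.ndim - 1, 0))
    return out.reshape([G.shape[1] for G in cores])

# with no effective truncation the decomposition is exact
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4, 5, 6))
cores = tt_svd(A, max_rank=99)
```

Capping `max_rank` gives the low-rank representations on which manifold methods such as the Riemannian conjugate gradient scheme above operate; here the cap is loose, so the reconstruction reproduces A exactly.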

Michael Maximilian Steinlechner

Tensor completion aims to reconstruct a high-dimensional data set with a large fraction of missing entries. The assumption of low-rank structure in the underlying original data allows us to cast the completion problem into an optimization problem restricted to the manifold of fixed-rank tensors. Elements of this smooth embedded submanifold can be efficiently represented in the tensor train (TT) or matrix product states (MPS) format with storage complexity scaling linearly with the number of dimensions. We present a nonlinear conjugate gradient scheme within the framework of Riemannian optimization which exploits this favorable scaling. Numerical experiments and comparison to existing methods show the effectiveness of our approach for the approximation of multivariate functions. Finally, we show that our algorithm can obtain competitive reconstructions from uniform random sampling of few entries compared to adaptive sampling techniques such as cross-approximation.