
# Low-rank methods for parameter-dependent eigenvalue problems and matrix equations

Abstract

The focus of this thesis is on developing efficient algorithms for two important problems arising in model reduction: estimating the smallest eigenvalue of a parameter-dependent Hermitian matrix, and solving large-scale linear matrix equations. In both cases, the key is to extract and exploit underlying low-rank properties.

Reliable and efficient algorithms for estimating the smallest eigenvalue of a parameter-dependent Hermitian matrix $A(\mu)$ for many parameter values $\mu$ are important in a variety of applications. Most notably, they play a crucial role in the *a posteriori* error estimation of reduced basis methods for parametrized partial differential equations. We propose a novel subspace approach, which builds upon the current state of the art, the Successive Constraint Method (SCM), and improves it by additionally incorporating the sampled smallest eigenvectors and implicitly exploiting their smoothness properties. Like SCM, our approach provides rigorous lower and upper bounds for the smallest eigenvalues on the parameter domain $D$. We present theoretical and experimental evidence that our approach represents a significant improvement over SCM, in the sense that the bounds are often much tighter, at negligible additional cost. We have successfully applied the approach to the computation of coercivity and inf-sup constants, as well as to the computation of $\varepsilon$-pseudospectra.

Solving an $m \times n$ linear matrix equation $A_1 X B_1^T + \cdots + A_K X B_K^T = C$ as an $mn \times mn$ linear system typically limits the feasible values of $m, n$ to a few hundred at most. We propose a new approach, which exploits the fact that the solution $X$ can often be well approximated by a low-rank matrix, and computes it by combining greedy low-rank techniques with Galerkin projection and preconditioned gradients. This can be implemented so that only linear systems of size $m \times m$ and $n \times n$ need to be solved. Moreover, these linear systems inherit the sparsity of the coefficient matrices, which makes it possible to address linear matrix equations as large as $m = n = O(10^5)$. Numerical experiments demonstrate that the proposed methods perform well for generalized Lyapunov equations as well as for standard Lyapunov equations.

Finally, we combine the ideas used for matrix equations and parameter-dependent eigenvalue problems, and propose a low-rank reduced basis approach for solving parameter-dependent Lyapunov equations.
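To illustrate the subspace idea behind the upper bound, here is a minimal sketch in Python (my own illustration, not the thesis code): smallest eigenvectors of $A(\mu)$ sampled at a few training parameters span a subspace with orthonormal basis $V$, and for any new $\mu$ the smallest Ritz value of $V^T A(\mu) V$ is a rigorous upper bound on $\lambda_{\min}(A(\mu))$ by the Rayleigh-Ritz principle. The affine dependence $A(\mu) = A_0 + \mu A_1$ and the matrices are made up for the example; the matching lower bound requires the SCM machinery and is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical affine parameter dependence A(mu) = A0 + mu * A1 (both Hermitian).
M0, M1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A0, A1 = (M0 + M0.T) / 2, (M1 + M1.T) / 2
A = lambda mu: A0 + mu * A1

# Offline: sample the smallest eigenvectors at a few training parameters ...
train_mus = np.linspace(-1.0, 1.0, 5)
vecs = []
for mu in train_mus:
    w, U = np.linalg.eigh(A(mu))        # eigenvalues in ascending order
    vecs.append(U[:, 0])                # eigenvector of the smallest eigenvalue

# ... and collect them in an orthonormal basis V of the sampled subspace.
V, _ = np.linalg.qr(np.column_stack(vecs))

# Online: the smallest Ritz value of the small projected matrix V^T A(mu) V
# is an upper bound on lambda_min(A(mu)) for any new parameter.
mu_test = 0.37
ub = np.linalg.eigvalsh(V.T @ A(mu_test) @ V)[0]
exact = np.linalg.eigvalsh(A(mu_test))[0]
print(f"upper bound {ub:.6f} >= exact {exact:.6f}")
```

Because the eigenvectors depend smoothly on $\mu$, a small sampled subspace already captures the eigenvector at nearby parameters, which is why the bound tends to be tight.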



Related concepts (25)

Partial differential equation

In mathematics, a partial differential equation (PDE) is an equation which imposes relations between the various partial derivatives of a multivariable function. The function is often thought of as an "unknown" to be solved for, much as $x$ is an unknown in an algebraic equation.

Smoothness

In mathematical analysis, the smoothness of a function is a property measured by the number of continuous derivatives it has over some domain, called its differentiability class. At the very minimum, a function can be considered smooth if it is differentiable everywhere, and hence continuous.

Galerkin method

In mathematics, in the area of numerical analysis, Galerkin methods are named after the Soviet mathematician Boris Galerkin. They convert a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions.

Related publications (35)


In this thesis, we study two distinct problems.
The first problem concerns the linear system of partial differential equations obtained by taking a $k$-form $u$, applying the exterior derivative $d$, and adding the wedge product with a given 1-form $a$, i.e. the operator $u \mapsto du + a \wedge u$. The study of this differential operator is linked to the study of multiplication by a 2-form, namely the linear system obtained by taking a $k$-form and applying the wedge product with $da$, the exterior derivative of $a$. We establish links between the partial differential equation and this linear system.
The second problem is a generalization of the symmetric gradient and the curl equations. The symmetric gradient equation takes a vector field $u$, applies the gradient, and adds the transpose of the gradient, whereas the curl equation subtracts the transpose of the gradient. Both can be seen as equations of the form $A \nabla u + (\nabla u)^T A$, where $A$ is a symmetric matrix in the case of the symmetric gradient and skew-symmetric in the case of the curl equation. We generalize to the case where $A$ satisfies no symmetry assumption and, more significantly, add a Dirichlet condition on the boundary.
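The structural claim in that abstract can be checked numerically. The following sketch (my own illustration, not from the publication) verifies that for any matrix $G$ standing in for a gradient $\nabla u$, the expression $A G + G^T A$ is symmetric whenever $A$ is symmetric (with $A = I$ it reduces to the symmetric gradient $\nabla u + (\nabla u)^T$) and skew-symmetric whenever $A$ is skew-symmetric, matching the curl-type structure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
G = rng.standard_normal((n, n))          # stands in for a gradient matrix grad(u)

M = rng.standard_normal((n, n))
A_sym = (M + M.T) / 2                    # a symmetric A
A_skew = (M - M.T) / 2                   # a skew-symmetric A

expr = lambda A: A @ G + G.T @ A

E_sym, E_skew = expr(A_sym), expr(A_skew)
assert np.allclose(E_sym, E_sym.T)       # symmetric A  -> symmetric result
assert np.allclose(E_skew, -E_skew.T)    # skew A       -> skew-symmetric result

# With A = I the expression reduces to the symmetric gradient G + G^T.
assert np.allclose(expr(np.eye(n)), G + G.T)
```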

Daniel Kressner, Petar Sirkovic

This work is concerned with the numerical solution of large-scale linear matrix equations $A_1 X B_1^T + \cdots + A_K X B_K^T = C$. The most straightforward approach computes $X \in \mathbb{R}^{m \times n}$ from the solution of an $mn \times mn$ linear system, typically limiting the feasible values of $m, n$ to a few hundred at most. Our new approach exploits the fact that $X$ can often be well approximated by a low-rank matrix. It combines greedy low-rank techniques with Galerkin projection and preconditioned gradients. In turn, only linear systems of size $m \times m$ and $n \times n$ need to be solved. Moreover, these linear systems inherit the sparsity of the coefficient matrices, which makes it possible to address linear matrix equations as large as $m = n = O(10^5)$. Numerical experiments demonstrate that the proposed methods perform well for generalized Lyapunov equations. Even for the case of standard Lyapunov equations, our methods can be advantageous, as we do not need to assume that $C$ has low rank.
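To make the "only $m \times m$ and $n \times n$ systems" claim concrete, here is a minimal greedy rank-one sketch (my own simplified illustration, not the authors' code). Each greedy step adds a rank-one correction $u v^T$ fitted to the current residual by alternating least squares: projecting the residual equation onto $v$ yields an $m \times m$ system for $u$, and projecting onto $u$ yields an $n \times n$ system for $v$. The Galerkin-projection and preconditioned-gradient refinements from the paper are omitted.

```python
import numpy as np

def greedy_rank1(As, Bs, C, n_terms=20, sweeps=10):
    """Greedy low-rank sketch for sum_k A_k X B_k^T = C."""
    m, n = C.shape
    X = np.zeros((m, n))
    for _ in range(n_terms):
        # Residual of the full matrix equation for the current iterate.
        R = C - sum(A @ X @ B.T for A, B in zip(As, Bs))
        u, v = np.ones(m), np.ones(n)
        for _ in range(sweeps):
            v /= np.linalg.norm(v)
            # Project the rank-one residual equation onto v: m x m system for u,
            # since (u v^T) B_k^T v = (v^T B_k v) u.
            M_u = sum((v @ B @ v) * A for A, B in zip(As, Bs))
            u = np.linalg.solve(M_u, R @ v)
            u /= np.linalg.norm(u)
            # Project onto u: n x n system for v.
            M_v = sum((u @ A @ u) * B for A, B in zip(As, Bs))
            v = np.linalg.solve(M_v, R.T @ u)
        X += np.outer(u, v)
    return X

# Example: a standard Lyapunov equation A X + X A^T = C, i.e. K = 2 with
# (A_1, B_1) = (A, I) and (A_2, B_2) = (I, A); the data is made up.
rng = np.random.default_rng(0)
n = 100
A = -5 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable, well conditioned
C = np.outer(rng.standard_normal(n), rng.standard_normal(n))
X = greedy_rank1([A, np.eye(n)], [np.eye(n), A], C)
print(np.linalg.norm(A @ X + X @ A.T - C) / np.linalg.norm(C))
```

In the large sparse setting the two projected solves would use sparse factorizations or iterative solvers; the dense `np.linalg.solve` calls here just keep the sketch short.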

Daniel Kressner, Petar Sirkovic

This work is concerned with approximating the smallest eigenvalue of a parameter-dependent Hermitian matrix $A(\mu)$ for many parameter values $\mu$ in a domain $D \subset \mathbb{R}^P$. The design of reliable and efficient algorithms for addressing this task is of importance in a variety of applications. Most notably, it plays a crucial role in estimating the error of reduced basis methods for parametrized partial differential equations. The current state-of-the-art approach, the so-called successive constraint method (SCM), addresses affine linear parameter dependencies by combining sampled Rayleigh quotients with linear programming techniques. In this work, we propose a subspace approach that additionally incorporates the sampled eigenvectors of $A(\mu)$ and implicitly exploits their smoothness properties. Like SCM, our approach results in rigorous lower and upper bounds for the smallest eigenvalues on $D$. Theoretical and experimental evidence is given to demonstrate that our approach represents a significant improvement over SCM in the sense that the bounds are often much tighter, at negligible additional cost.
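For readers unfamiliar with SCM, the following sketch illustrates its linear-programming lower bound under an assumed affine dependence $A(\mu) = \sum_q \theta_q(\mu) A_q$ (the decomposition and matrices are made up; this is not the authors' code). At each sampled $\mu_j$ with smallest eigenpair $(\lambda_j, v_j)$, the Rayleigh-quotient vector $y_j = (v_j^T A_1 v_j, \ldots, v_j^T A_Q v_j)$ satisfies $\theta(\mu_j)^T y_j = \lambda_j$, so minimizing $\theta(\mu)^T y$ over the polytope defined by box bounds on each $y_q$ and the constraints $\theta(\mu_j)^T y \ge \lambda_j$ gives a rigorous lower bound at a new $\mu$; evaluating $\theta(\mu)^T y_j$ at the sampled vectors gives an upper bound.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, Q = 120, 2

# Assumed affine dependence A(mu) = theta_1(mu) A_1 + theta_2(mu) A_2
# with theta(mu) = (1, mu); both A_q Hermitian and made up for the example.
Ms = [rng.standard_normal((n, n)) for _ in range(Q)]
Aq = [(M + M.T) / 2 for M in Ms]
theta = lambda mu: np.array([1.0, mu])
A = lambda mu: sum(t * Ak for t, Ak in zip(theta(mu), Aq))

# Box bounds on each y_q: extremal eigenvalues of A_q.
bounds = [(np.linalg.eigvalsh(Ak)[0], np.linalg.eigvalsh(Ak)[-1]) for Ak in Aq]

# Offline: smallest eigenpairs and Rayleigh-quotient vectors at training points.
train = np.linspace(-1.0, 1.0, 5)
lams, ys = [], []
for mu in train:
    w, U = np.linalg.eigh(A(mu))
    v = U[:, 0]
    lams.append(w[0])
    ys.append(np.array([v @ Ak @ v for Ak in Aq]))

# Online: SCM-style bounds at a new parameter.
mu = 0.37
# Lower bound: min theta(mu)^T y  s.t.  theta(mu_j)^T y >= lam_j and box bounds.
res = linprog(c=theta(mu),
              A_ub=-np.array([theta(t) for t in train]),  # -theta_j^T y <= -lam_j
              b_ub=-np.array(lams),
              bounds=bounds)
lb = res.fun
# Upper bound: theta(mu)^T y_j is a Rayleigh quotient of A(mu) for each sample.
ub = min(theta(mu) @ y for y in ys)
exact = np.linalg.eigvalsh(A(mu))[0]
print(f"{lb:.4f} <= {exact:.4f} <= {ub:.4f}")
```

The subspace approach of the paper replaces the eigenvector information discarded by this LP with a projected eigenvalue problem, which is why its bounds are typically much tighter for the same set of samples.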