
# Compress-and-restart block Krylov subspace methods for Sylvester matrix equations

Abstract

Block Krylov subspace methods (KSMs) comprise building blocks in many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.
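The projection step underlying polynomial block KSMs for Sylvester equations can be sketched as follows. This is a minimal, non-restarted Galerkin variant in Python; the function names `block_krylov_basis` and `sylvester_galerkin` are illustrative, not from the paper, and the compressed projected equation is solved densely with SciPy's `solve_sylvester`:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def block_krylov_basis(A, C, m):
    # Orthonormal basis of the block Krylov space
    # K_m(A, C) = span{C, A C, ..., A^{m-1} C}
    Q, _ = np.linalg.qr(C)
    blocks = [Q]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for _ in range(2):            # block Gram-Schmidt, repeated for stability
            for V in blocks:
                W = W - V @ (V.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    return np.hstack(blocks)

def sylvester_galerkin(A, B, C1, C2, m):
    # Galerkin approximation of  A X + X B^T = C1 C2^T  from the spaces
    # K_m(A, C1) and K_m(B, C2); only the small projected Sylvester
    # equation is solved densely.
    U = block_krylov_basis(A, C1, m)
    V = block_krylov_basis(B, C2, m)
    Y = solve_sylvester(U.T @ A @ U, (V.T @ B @ V).T, (U.T @ C1) @ (C2.T @ V))
    return U @ Y @ V.T
```

The restarting-with-compression scheme the paper proposes would wrap such a projection in an outer loop, discarding the basis after each cycle and compressing the low-rank residual factors; that outer loop is not shown here.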




Related concepts (14)

Matrix (mathematics)

In mathematics, matrices are arrays of elements (numbers, characters) used to express theoretical results in computational, and hence operational, terms…

Partial differential equation

In mathematics, more precisely in differential calculus, a partial differential equation (sometimes abbreviated PDE) is a differential equation…

Linear system

A linear system (the term "system" being taken in the sense of control theory, namely a dynamical system) is an object of the physical world that can be described by linear equations…

Related publications (11)


The focus of this thesis is on developing efficient algorithms for two important problems arising in model reduction, estimation of the smallest eigenvalue of a parameter-dependent Hermitian matrix and the solution of large-scale linear matrix equations, by extracting and exploiting underlying low-rank properties.

The availability of reliable and efficient algorithms for estimating the smallest eigenvalue of a parameter-dependent Hermitian matrix $A(\mu)$ for many parameter values $\mu$ is important in a variety of applications. Most notably, it plays a crucial role in \textit{a posteriori} error estimation for reduced basis methods for parametrized partial differential equations. We propose a novel subspace approach, which builds upon the current state-of-the-art approach, the Successive Constraint Method (SCM), and improves it by additionally incorporating the sampled smallest eigenvectors and implicitly exploiting their smoothness properties. Like SCM, our approach provides rigorous lower and upper bounds for the smallest eigenvalues on the parameter domain $D$. We present theoretical and experimental evidence that our approach represents a significant improvement over SCM, in the sense that the bounds are often much tighter, at negligible additional cost. We have successfully applied the approach to the computation of the coercivity and inf-sup constants, as well as of $\varepsilon$-pseudospectra.

Solving an $m \times n$ linear matrix equation $A_1 X B_1^T + \cdots + A_K X B_K^T = C$ as an $mn \times mn$ linear system typically limits the feasible values of $m, n$ to a few hundred at most. We propose a new approach, which exploits the fact that the solution $X$ can often be well approximated by a low-rank matrix, and computes it by combining greedy low-rank techniques with Galerkin projection and preconditioned gradients. This can be implemented so that only linear systems of size $m \times m$ and $n \times n$ need to be solved. Moreover, these linear systems inherit the sparsity of the coefficient matrices, which makes it possible to address linear matrix equations as large as $m = n = O(10^5)$. Numerical experiments demonstrate that the proposed methods perform well for generalized as well as standard Lyapunov equations.

Finally, we combine the ideas used for addressing matrix equations and parameter-dependent eigenvalue problems, and propose a low-rank reduced basis approach for solving parameter-dependent Lyapunov equations.
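The size barrier mentioned above comes from the Kronecker (vectorized) formulation: with column-major `vec`, $\mathrm{vec}(A X B^T) = (B \otimes A)\,\mathrm{vec}(X)$, so the matrix equation turns into a dense $mn \times mn$ linear system. A small NumPy sketch of this identity (illustrative sizes only):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 4
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
C = rng.standard_normal((m, n))

# vec(A X B^T) = (B kron A) vec(X) for column-major vec, so the
# matrix equation  A X B^T = C  is the mn x mn system below.
K = np.kron(B, A)                             # (m*n) x (m*n): 20 x 20 here,
x = np.linalg.solve(K, C.flatten(order="F"))  # but 10^20 entries at m = n = 1e5
X = x.reshape((m, n), order="F")

assert np.allclose(A @ X @ B.T, C)
```

Forming `K` explicitly is exactly what low-rank methods avoid: when $X$ admits a low-rank approximation, greedy and Galerkin techniques only ever solve systems of size $m \times m$ and $n \times n$.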

The multiquery solution of parametric partial differential equations (PDEs), that is, PDEs depending on a vector of parameters, is computationally challenging and appears in several engineering contexts, such as PDE-constrained optimization, uncertainty quantification or sensitivity analysis. When using the finite element (FE) method as the approximation technique, an algebraic system must be solved for each instance of the parameter, leading to a critical bottleneck in a multiquery context, a problem that is even more pronounced for nonlinear or time-dependent PDEs. Several techniques have been proposed to deal with sequences of linear systems, such as truncated Krylov subspace recycling methods, deflated restarting techniques and approximate inverse preconditioners; however, these techniques do not satisfactorily exploit the parameter dependence. More recently, the reduced basis (RB) method, together with other reduced order modeling (ROM) techniques, emerged as an efficient tool to tackle parametrized PDEs.
In this thesis, we investigate a novel preconditioning strategy for parametrized systems which arise from the FE discretization of parametrized PDEs. Our preconditioner combines multiplicatively a RB coarse component, which is built upon the RB method, and a nonsingular fine grid preconditioner. The proposed technique hinges upon the construction of a new Multi Space Reduced Basis (MSRB) method, where a RB solver is built at each step of the chosen iterative method and trained to accurately solve the error equation.
The resulting preconditioner directly exploits the parameter dependence, since it is tailored to the class of problems at hand, and significantly speeds up the solution of the parametrized linear system.
We analyze the proposed preconditioner from a theoretical standpoint, providing assumptions which lead to its well-posedness and efficiency.
We apply our strategy to a broad range of problems described by parametrized PDEs:
(i) elliptic problems such as advection-diffusion-reaction equations, (ii) evolution problems such as time-dependent advection-diffusion-reaction equations or linear elastodynamics equations, (iii) saddle-point problems such as the Stokes equations, and, finally, (iv) the Navier-Stokes equations.
Even though the structure of the preconditioner is similar for all these classes of problems, its fine and coarse components must be accurately chosen in order to provide the best possible results.
Several comparisons are made with respect to the current state-of-the-art preconditioning and ROM techniques.
Finally, we employ the proposed technique to speed up the solution of problems in the field of cardiovascular modeling.
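The multiplicative combination of a coarse reduced component and a fine preconditioner described above can be illustrated with a generic two-level preconditioner: a Galerkin coarse correction on a reduced space `span(V)`, followed by a fine correction on the updated residual. This is only a schematic sketch of the two-level structure, not the MSRB construction itself; `two_level_apply` and `smooth` are illustrative names:

```python
import numpy as np

def two_level_apply(A, smooth, V, r):
    # Multiplicative two-level preconditioner applied to a residual r:
    # first a Galerkin coarse correction on span(V) (the reduced-basis
    # role), then a fine correction `smooth` on the remaining residual.
    Ac = V.T @ A @ V                       # reduced (coarse) operator
    z = V @ np.linalg.solve(Ac, V.T @ r)   # coarse correction
    z = z + smooth(r - A @ z)              # fine correction
    return z
```

In practice `V` would hold reduced basis functions and `smooth` the action of a fine-grid preconditioner (e.g. one sweep of a smoother); the multiplicative coupling means the fine component only has to handle what the coarse solve leaves behind.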

Daniel Kressner, Stefano Massei

Linear matrix equations, such as the Sylvester and Lyapunov equations, play an important role in various applications, including the stability analysis and dimensionality reduction of linear dynamical control systems and the solution of partial differential equations. In this work, we present and analyze a new algorithm, based on tensorized Krylov subspaces, for quickly updating the solution of such a matrix equation when its coefficients undergo low-rank changes. We demonstrate how our algorithm can be utilized to accelerate the Newton method for solving continuous-time algebraic Riccati equations. Our algorithm also forms the basis of a new divide-and-conquer approach for linear matrix equations with coefficients that feature hierarchical low-rank structure, such as hierarchically off-diagonal low-rank, hierarchically semiseparable, and banded matrices. Numerical experiments demonstrate the advantages of divide-and-conquer over existing approaches, in terms of computational time and memory consumption.
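The updating problem described above can be made concrete for the Lyapunov case: if $X_0$ solves $A X_0 + X_0 A^T = -BB^T$ and $A$ is perturbed by a low-rank term $uv^T$, the correction $D = X_1 - X_0$ itself solves a Lyapunov equation whose right-hand side $u(v^T X_0) + (X_0 v)u^T$ has rank at most $2\,\mathrm{rank}(u)$ — the structure that Krylov-based updating exploits. A small SciPy check of this identity (the paper's algorithm builds $D$ from tensorized Krylov subspaces rather than by a dense solve, which is used here only for verification):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n)) - n * np.eye(n)   # shifted to be stable
B = rng.standard_normal((n, 2))
X0 = solve_continuous_lyapunov(A, -B @ B.T)       # A X0 + X0 A^T = -B B^T

u = 0.1 * rng.standard_normal((n, 1))             # low-rank change of A
v = 0.1 * rng.standard_normal((n, 1))
A1 = A + u @ v.T

# The correction D = X1 - X0 solves
#   A1 D + D A1^T = -(u (v^T X0) + (X0 v) u^T),
# a Lyapunov equation with a rank-2 right-hand side.
R = u @ (v.T @ X0) + (X0 @ v) @ u.T
D = solve_continuous_lyapunov(A1, -R)

X1 = solve_continuous_lyapunov(A1, -B @ B.T)      # recomputed from scratch
assert np.allclose(X0 + D, X1)
```

Since the correction equation has a low-rank right-hand side, solving it by a (tensorized) Krylov method is far cheaper than recomputing $X_1$ from scratch when only the update is needed.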