MATHICSE Technical Report: Block Krylov subspace methods for functions of matrices II: Modified block FOM

Abstract

We analyze an expansion of the generalized block Krylov subspace framework of [Electron. Trans. Numer. Anal., 47 (2017), pp. 100-126]. This expansion allows the use of low-rank modifications of the matrix projected onto the block Krylov subspace and contains, as special cases, the block GMRES method and the new block Radau-Arnoldi method. Within this general setting, we present results that extend the interpolation property from the non-block case to a matrix polynomial interpolation property for the block case, and we relate the eigenvalues of the projected matrix to the latent roots of these matrix polynomials. Some convergence results for these modified block FOM methods for solving linear systems are presented. We then show how *cospatial* residuals can be preserved in the case of families of shifted block linear systems. This result is used to derive computationally practical restarted algorithms for block Krylov approximations that compute the action of a matrix function on a set of several vectors simultaneously. We prove some convergence results and present numerical results showing that two modifications of FOM, the block harmonic and the block Radau-Arnoldi methods for matrix functions, can significantly improve the convergence behavior.
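The basic block FOM idea underlying the report can be illustrated with a minimal sketch: build a block Krylov basis with block Arnoldi, then approximate f(A)B by projecting f onto the small block Hessenberg matrix. This is a generic textbook version, not the report's modified method; the function names and the diagonal test matrix are illustrative choices, and no low-rank modification or restarting is included.

```python
import numpy as np
from scipy.linalg import expm

def block_arnoldi(A, B, m):
    """Block Arnoldi: orthonormal basis V_m of the block Krylov space
    K_m(A, B) and the block Hessenberg matrix H_m, with B = V_1 * beta."""
    n, s = B.shape
    V = np.zeros((n, s * (m + 1)))
    H = np.zeros((s * (m + 1), s * m))
    Q, beta = np.linalg.qr(B)                 # thin QR of the starting block
    V[:, :s] = Q
    for j in range(m):
        W = A @ V[:, j*s:(j+1)*s]
        for i in range(j + 1):                # block modified Gram-Schmidt
            Vi = V[:, i*s:(i+1)*s]
            Hij = Vi.T @ W
            H[i*s:(i+1)*s, j*s:(j+1)*s] = Hij
            W -= Vi @ Hij
        Q, R = np.linalg.qr(W)
        V[:, (j+1)*s:(j+2)*s] = Q
        H[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = R
    return V[:, :s*m], H[:s*m, :s*m], beta

# Block FOM approximation  f(A)B ~= V_m f(H_m) E_1 beta,  here with f = exp
n, s, m = 200, 3, 25
rng = np.random.default_rng(0)
A = -np.diag(np.linspace(0.1, 10.0, n))       # symmetric negative definite test matrix
B = rng.standard_normal((n, s))
Vm, Hm, beta = block_arnoldi(A, B, m)
E1 = np.zeros((s * m, s)); E1[:s, :] = np.eye(s)
approx = Vm @ (expm(Hm) @ (E1 @ beta))
exact = expm(A) @ B
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

With 25 block steps the approximation of exp(A)B is accurate to roughly machine precision on this well-behaved spectrum; the modified methods in the report target cases where plain FOM converges much more slowly.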

Related concepts (15)

Linear system

In systems theory, a linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically exhibit features and properties that are much simpler than the nonlinear case.

Generalized minimal residual method

In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual.
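As a quick illustration of the method (unrelated to the block variants in the report above), SciPy ships a ready-made GMRES solver; the tridiagonal test system here is an arbitrary choice:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Nonsymmetric, diagonally dominant tridiagonal test system
n = 100
A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(A, b)          # info == 0 signals convergence
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

GMRES only needs matrix-vector products with A, which is why it pairs naturally with sparse or implicitly defined operators.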

Matrix (mathematics)

In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.

Related publications (2)

The focus of this thesis is on developing efficient algorithms for two important problems arising in model reduction, namely estimating the smallest eigenvalue of a parameter-dependent Hermitian matrix and solving large-scale linear matrix equations, by extracting and exploiting underlying low-rank properties.

Availability of reliable and efficient algorithms for estimating the smallest eigenvalue of a parameter-dependent Hermitian matrix $A(\mu)$ for many parameter values $\mu$ is important in a variety of applications. Most notably, it plays a crucial role in *a posteriori* error estimation for reduced basis methods for parametrized partial differential equations. We propose a novel subspace approach, which builds upon the current state-of-the-art approach, the Successive Constraint Method (SCM), and improves it by additionally incorporating the sampled smallest eigenvectors and implicitly exploiting their smoothness properties. Like SCM, our approach provides rigorous lower and upper bounds for the smallest eigenvalues on the parameter domain $D$. We present theoretical and experimental evidence that our approach represents a significant improvement over SCM, in the sense that the bounds are often much tighter, at negligible additional cost. We have successfully applied the approach to the computation of coercivity and inf-sup constants, as well as of $\varepsilon$-pseudospectra.

Solving an $m\times n$ linear matrix equation $A_1 X B_1^T + \cdots + A_K X B_K^T = C$ as an $mn \times mn$ linear system typically limits the feasible values of $m,n$ to a few hundred at most. We propose a new approach, which exploits the fact that the solution $X$ can often be well approximated by a low-rank matrix, and computes it by combining greedy low-rank techniques with Galerkin projection and preconditioned gradients. This can be implemented so that only linear systems of size $m \times m$ and $n \times n$ need to be solved. Moreover, these linear systems inherit the sparsity of the coefficient matrices, which makes it possible to address linear matrix equations as large as $m = n = O(10^5)$. Numerical experiments demonstrate that the proposed methods perform well for generalized Lyapunov equations as well as for standard Lyapunov equations. Finally, we combine the ideas used for matrix equations and parameter-dependent eigenvalue problems, and propose a low-rank reduced basis approach for solving parameter-dependent Lyapunov equations.
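The low-rank property that the thesis exploits can be observed directly on a small example: for a stable Lyapunov equation with a rank-1 right-hand side, the singular values of the solution decay rapidly. The sketch below uses SciPy's dense Bartels-Stewart solver as a reference, not the thesis's greedy low-rank algorithm, and the diagonal test matrix is an illustrative choice.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable A and a rank-1 right-hand side C = c c^T
n = 300
A = -np.diag(np.arange(1.0, n + 1))
c = np.ones((n, 1))
C = c @ c.T

# Solve A X + X A^T = -C, i.e. A X + X A^T + C = 0
X = solve_continuous_lyapunov(A, -C)

# Singular values of X decay fast: X is numerically low-rank
sv = np.linalg.svd(X, compute_uv=False)
numerical_rank = int(np.sum(sv > 1e-8 * sv[0]))
```

Here the 300 x 300 solution has numerical rank on the order of a dozen, which is exactly what makes greedy low-rank and Galerkin-projection approaches feasible at scales where forming the $mn \times mn$ system is hopeless.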

In this thesis we address the numerical approximation of the incompressible Navier-Stokes equations evolving in a moving domain with the spectral element method and high-order time integrators.

First, we present the spectral element method and the basic tools to perform spectral discretizations of the Galerkin or Galerkin with Numerical Integration (G-NI) type. We cover a large range of possibilities regarding the reference elements, basis functions, interpolation points, and quadrature points. In this approach, the integration and differentiation of the polynomial functions are done numerically with the help of suitable point sets. Regarding differentiation, we present a detailed numerical study of which points should be used to attain better stability (among the choices we present).

Second, we introduce the incompressible steady/unsteady Stokes and Navier-Stokes equations and their spectral approximation. In the unsteady case, we combine Backward Differentiation Formulas with an extrapolation formula of the same order for the time integration. Once the equations are discretized, a linear system must be solved to obtain the approximate solution. In this context, we consider the solution of the whole system of equations combined with a block-type preconditioner. The preconditioner is shown to be optimal in terms of the number of iterations used by the GMRES method in the steady case, but not in the unsteady one. An alternative is to use algebraic factorization methods of the Yosida type and decouple the computation of velocity and pressure. A benchmark is also presented to assess the numerical convergence properties of this type of method in our context.

Third, we extend the algorithms developed in the fixed-domain case to the Arbitrary Lagrangian Eulerian framework. The issue of defining a high-order ALE map is addressed, which allows us to construct a computational domain described with curved elements. A benchmark using either a direct method or the Yosida-q methods to solve the linear system shows the convergence orders of the proposed method. Finally, we apply the developed method, with an implicit fully coupled approach and a semi-implicit approach, to solve a fluid-structure interaction problem for a simple 2D hemodynamics example.
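The time-integration idea mentioned above (BDF with same-order extrapolation of the explicit part) can be sketched on a scalar test problem. This is a generic illustration on the logistic ODE u' = u - u^2 with the linear term implicit and the quadratic term extrapolated, not the thesis's spectral element implementation; the convergence check confirms the expected second order of BDF2/EX2.

```python
import numpy as np

def bdf2_ex2(u0, T, N):
    """BDF2 in time with second-order extrapolation of the nonlinear term,
    applied to u' = u - u^2 (linear part implicit, u^2 extrapolated)."""
    dt = T / N
    u_prev = u0
    # One semi-implicit backward Euler step to start the two-step formula
    u_curr = (u_prev - dt * u_prev**2) / (1.0 - dt)
    for _ in range(N - 1):
        u_star = 2.0 * u_curr - u_prev           # 2nd-order extrapolated value
        # (3u^{n+1} - 4u^n + u^{n-1}) / (2dt) = u^{n+1} - (u*)^2
        rhs = (4.0 * u_curr - u_prev) / (2.0 * dt) - u_star**2
        u_next = rhs / (3.0 / (2.0 * dt) - 1.0)
        u_prev, u_curr = u_curr, u_next
    return u_curr

# Exact solution for u(0) = 1/2 is u(t) = e^t / (1 + e^t)
u0, T = 0.5, 1.0
exact = np.exp(T) / (1.0 + np.exp(T))
e1 = abs(bdf2_ex2(u0, T, 100) - exact)
e2 = abs(bdf2_ex2(u0, T, 200) - exact)
ratio = e1 / e2   # ~4 for a second-order scheme
```

Treating only the linear (stiff) part implicitly is what keeps each time step a linear solve in the Navier-Stokes setting, where the extrapolated term plays the role of the convective nonlinearity.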