Summary
In linear algebra, the order-$r$ Krylov subspace generated by an $n \times n$ matrix $A$ and a vector $b$ of dimension $n$ is the linear subspace spanned by the images of $b$ under the first $r$ powers of $A$ (starting from $A^0 = I$), that is,

$\mathcal{K}_r(A, b) = \operatorname{span}\{b, Ab, A^2 b, \ldots, A^{r-1} b\}.$

The concept is named after the Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about it in 1931.

The vectors $b, Ab, \ldots, A^{r-1}b$ are linearly independent until $r = r_0$, and $\mathcal{K}_r(A, b) = \mathcal{K}_{r_0}(A, b)$ for all $r \ge r_0$. Thus, $r_0$ denotes the maximal dimension of a Krylov subspace. The maximal dimension satisfies $r_0 \le 1 + \operatorname{rank} A$ and $r_0 \le n$. More exactly, $r_0 \le \deg p(x)$, where $p(x)$ is the minimal polynomial of $A$; furthermore, there exists a $b$ such that $r_0 = \deg p(x)$. $\mathcal{K}_{r_0}(A, b)$ is a cyclic submodule generated by $b$ of the torsion $k[x]$-module $(k^n)^A$, where $k^n$ is the linear space over the field $k$, and $k^n$ can be decomposed as a direct sum of Krylov subspaces.

Krylov subspaces are used in algorithms for finding approximate solutions to high-dimensional linear algebra problems. Many tests on linear dynamical systems in control theory, especially those related to controllability and observability, involve checking the rank of a Krylov subspace. These tests are equivalent to finding the span of the Gramians associated with the system/output maps, so the uncontrollable and unobservable subspaces are simply the orthogonal complements of the corresponding Krylov subspaces.

Modern iterative methods such as Arnoldi iteration can be used to find one (or a few) eigenvalues of large sparse matrices or to solve large systems of linear equations. These methods avoid matrix-matrix operations: they multiply vectors by the matrix and work with the resulting vectors. Starting with a vector $b$, one computes $Ab$, then multiplies that vector by $A$ to find $A^2 b$, and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. They can be applied whenever there is an algorithm that computes the matrix-vector product $Av$ without an explicit representation of $A$, giving rise to matrix-free methods.
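As an illustration of the matrix-vector pattern described above, here is a minimal Python sketch (assuming NumPy; the function name build_krylov_basis and the random test matrix are illustrative, not from the original text) that builds an orthonormal basis of $\mathcal{K}_r(A, b)$ using only products $v \mapsto Av$, i.e. in the matrix-free setting:

import numpy as np

def build_krylov_basis(matvec, b, r):
    # Build an orthonormal basis of K_r(A, b) using only matrix-vector
    # products; matvec(v) must return A @ v, so A itself is never needed.
    basis = []
    v = b / np.linalg.norm(b)
    for _ in range(r):
        # Orthogonalize the new direction against the basis so far
        # (modified Gram-Schmidt).
        for q in basis:
            v = v - (q @ v) * q
        norm_v = np.linalg.norm(v)
        if norm_v < 1e-12:
            break  # b, Ab, A^2 b, ... became linearly dependent: r_0 reached
        v = v / norm_v
        basis.append(v)
        v = matvec(v)  # next power of A applied to b, one matvec per step
    return np.column_stack(basis)

# Usage: A enters only through its action on a vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)
Q = build_krylov_basis(lambda v: A @ v, b, r=4)
print(Q.shape)  # (6, 4): an orthonormal basis of K_4(A, b)

One matrix-vector product per step is the design point: the cost per iteration stays linear in the number of nonzeros of $A$, which is what makes these methods viable for large sparse or implicitly defined matrices.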
Related concepts (7)
Arnoldi iteration
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices. The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called direct methods, which must run to completion to give any useful result (see, for example, the Householder transformation).
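A minimal NumPy sketch of the iteration (variable names are illustrative): $m$ steps produce an orthonormal Krylov basis $Q$ and a small upper Hessenberg matrix $H$ whose eigenvalues (Ritz values) approximate eigenvalues of $A$; this is the partial result the paragraph above refers to.

import numpy as np

def arnoldi(A, b, m):
    # After j steps: A @ Q[:, :j] == Q[:, :j+1] @ H[:j+1, :j].
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):  # orthogonalize against all previous basis vectors
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:  # exact invariant subspace found; stop early
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
Q, H = arnoldi(A, rng.standard_normal(200), m=30)
ritz_values = np.linalg.eigvals(H[:H.shape[1], :])  # approximations to outer eigenvalues of A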
Lanczos algorithm
The Lanczos algorithm is an iterative method devised by Cornelius Lanczos that is an adaptation of power methods to find the $m$ "most useful" (tending towards extreme highest/lowest) eigenvalues and eigenvectors of an $n \times n$ Hermitian matrix, where $m$ is often but not necessarily much smaller than $n$. Although computationally efficient in principle, the method as initially formulated was not useful, due to its numerical instability. In 1970, Ojalvo and Newman showed how to make the method numerically stable and applied it to the solution of very large engineering structures subjected to dynamic loading.
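For a Hermitian matrix the Arnoldi orthogonalization collapses to a three-term recurrence, which is the Lanczos algorithm. A minimal NumPy sketch in exact arithmetic (deliberately omitting the reorthogonalization that stabilized variants add to cope with the instability mentioned above):

import numpy as np

def lanczos(A, b, m):
    # m steps of Lanczos for Hermitian A: builds the tridiagonal matrix T
    # whose eigenvalues approximate the extreme eigenvalues of A.
    alpha = np.zeros(m)      # diagonal of T
    beta = np.zeros(m - 1)   # off-diagonal of T
    q_prev = np.zeros_like(b)
    q = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:  # the Krylov subspace stopped growing
                alpha, beta = alpha[:j + 1], beta[:j]
                break
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# Usage on a random symmetric matrix: the extreme Ritz values converge first.
rng = np.random.default_rng(2)
M = rng.standard_normal((500, 500))
A = (M + M.T) / 2
print(lanczos(A, rng.standard_normal(500), m=40)[-3:])  # top eigenvalue estimates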
Generalized minimal residual method
In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector. The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986. It is a generalization and improvement of the MINRES method due to Paige and Saunders in 1975.
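In practice GMRES is usually called from a library rather than hand-written; a short usage sketch with SciPy's scipy.sparse.linalg.gmres (the tridiagonal test system is an arbitrary nonsymmetric example, not from the original text):

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# A sparse, nonsymmetric system: the setting GMRES is designed for.
n = 1000
A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(A, b, restart=50)  # restarted GMRES; info == 0 means converged
print(info, np.linalg.norm(A @ x - b))  # residual norm of the computed solution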
Related courses (7)
MATH-453: Computational linear algebra
This course provides an overview of advanced techniques for solving large-scale linear algebra problems, as they typically arise in applications. A central goal of this course is to give the ability t
MATH-505: HPC for numerical methods and data analysis
The objective of this course is to provide the necessary background for designing efficient parallel algorithms in scientific computing as well as in the analysis of large volumes of data. The operati
MATH-413: Statistics for data science
Statistics lies at the foundation of data science, providing a unifying theoretical and methodological backbone for the diverse tasks encountered in this emerging field. This course rigorously develops
Related lectures (35)
Grassmann manifold and Retractions
Covers the Grassmann manifold and retractions on submanifolds.
Neutron Diffusion Equation: Numerical Methods
Covers numerical methods for solving the neutron diffusion equation and addressing heterogeneous effects in thermal reactors.
Newton-Krylov-hookstep Methods
Covers Newton-Krylov-hookstep methods for finding roots of high-dimensional systems.