
Generalized minimal residual method

Summary
In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector.

The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986. It is a generalization and improvement of the MINRES method due to Paige and Saunders in 1975. The MINRES method requires the matrix to be symmetric, but has the advantage that it only requires handling of three vectors. GMRES is a special case of the DIIS method developed by Peter Pulay in 1980; DIIS is also applicable to non-linear systems.

Denote the Euclidean norm of any vector $v$ by $\|v\|$. Denote the (square) system of linear equations to be solved by
$$Ax = b.$$
The matrix $A$ is assumed to be invertible of size $m$-by-$m$. Furthermore, it is assumed that $b$ is normalized, i.e., that $\|b\| = 1$.

The $n$-th Krylov subspace for this problem is
$$K_n = K_n(A, r_0) = \operatorname{span}\{r_0, A r_0, A^2 r_0, \ldots, A^{n-1} r_0\},$$
where $r_0 = b - A x_0$ is the initial residual given an initial guess $x_0 \neq 0$. Clearly $r_0 = b$ if $x_0 = 0$.

GMRES approximates the exact solution of $Ax = b$ by the vector $x_n \in K_n$ that minimizes the Euclidean norm of the residual $r_n = b - A x_n$.

The vectors $r_0, A r_0, \ldots, A^{n-1} r_0$ might be close to linearly dependent, so instead of this basis, the Arnoldi iteration is used to find orthonormal vectors $q_1, q_2, \ldots, q_n$ which form a basis for $K_n$. In particular, $q_1 = \|r_0\|^{-1} r_0$.

Therefore, the vector $x_n \in K_n$ can be written as $x_n = x_0 + Q_n y_n$ with $y_n \in \mathbb{R}^n$, where $Q_n$ is the $m$-by-$n$ matrix formed by $q_1, \ldots, q_n$. In other words, finding the $n$-th approximation of the solution (i.e., $x_n$) is reduced to finding the vector $y_n$, which is determined via minimizing the residual as described below.

The Arnoldi process also constructs $\tilde{H}_n$, an $(n+1)$-by-$n$ upper Hessenberg matrix which satisfies
$$A Q_n = Q_{n+1} \tilde{H}_n,$$
an equality which is used to simplify the calculation of $y_n$ (see below). Note that, for symmetric matrices, a symmetric tridiagonal matrix is actually achieved, resulting in the MINRES method.

Because the columns of $Q_{n+1}$ are orthonormal, we have
$$\|r_n\| = \|b - A x_n\| = \|r_0 - A Q_n y_n\| = \|\beta q_1 - Q_{n+1} \tilde{H}_n y_n\| = \|Q_{n+1}(\beta e_1 - \tilde{H}_n y_n)\| = \|\beta e_1 - \tilde{H}_n y_n\|,$$
where $e_1 = (1, 0, 0, \ldots, 0)^T$ is the first vector in the standard basis of $\mathbb{R}^{n+1}$, $\beta = \|b - A x_0\| = \|r_0\|$, and $x_0$ is the first trial vector (usually zero). Hence $y_n$ can be found by solving a small $(n+1)$-by-$n$ linear least-squares problem.
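Since $\|r_n\| = \|\beta e_1 - \tilde{H}_n y_n\|$ involves only the small Hessenberg matrix, each iterate reduces to an $(n+1)$-by-$n$ least-squares problem. The NumPy sketch below illustrates this structure under those assumptions: a minimal, unpreconditioned variant with no restarts or Givens-rotation updates; the function name and parameters are ours, not from the source.

```python
import numpy as np

def gmres(A, b, x0=None, max_iter=50, tol=1e-10):
    """Minimal GMRES sketch: Arnoldi basis + small least-squares solve.
    No restarts or preconditioning; assumes A is m-by-m and invertible."""
    m = A.shape[0]
    x0 = np.zeros(m) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((m, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))  # (n+1)-by-n upper Hessenberg
    Q[:, 0] = r0 / beta                     # q_1 = r_0 / ||r_0||
    for n in range(1, max_iter + 1):
        # Arnoldi step: orthogonalize A q_n against q_1, ..., q_n
        v = A @ Q[:, n - 1]
        for j in range(n):
            H[j, n - 1] = Q[:, j] @ v
            v -= H[j, n - 1] * Q[:, j]
        H[n, n - 1] = np.linalg.norm(v)
        if H[n, n - 1] < 1e-14:             # "happy breakdown": exact solution lies in K_n
            break
        Q[:, n] = v / H[n, n - 1]
    # Minimize ||beta e_1 - H_n y|| over y, then map back: x_n = x_0 + Q_n y_n
    rhs = np.zeros(n + 1)
    rhs[0] = beta
    y, *_ = np.linalg.lstsq(H[:n + 1, :n], rhs, rcond=None)
    return x0 + Q[:, :n] @ y

# Example: on a small well-conditioned system the result matches a direct solve.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 5 * np.eye(50)
b = rng.standard_normal(50)
print(np.linalg.norm(A @ gmres(A, b) - b))  # small residual
```

Production implementations instead update a QR factorization of $\tilde{H}_n$ with Givens rotations, which makes the residual norm available at every step without re-solving the least-squares problem.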
Related courses (1)
MATH-351: Advanced numerical analysis
The student will learn state-of-the-art algorithms for solving differential equations. The analysis and implementation of these algorithms will be discussed in some detail.
Related lectures (32)
Heat Equation: Fourier Series
Explores the heat equation solutions using Fourier series and energy conservation.
Energy Equilibrium and Newton CG Method
Covers continuum mechanics, linear elasticity, force balance, divergence, finite element discretization, energy minimization, and Newton's method.
Elastic Solid Energy Computation
Covers the computation of elastic energy in solid mechanics and equilibrium steps using conjugate gradients.
Related concepts (9)
Krylov subspace
In linear algebra, the order-$r$ Krylov subspace generated by an $n$-by-$n$ matrix $A$ and a vector $b$ of dimension $n$ is the linear subspace spanned by the images of $b$ under the first $r$ powers of $A$ (starting from $A^0 = I$), that is,
$$K_r(A, b) = \operatorname{span}\{b, Ab, A^2 b, \ldots, A^{r-1} b\}.$$
The concept is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about it in 1931. The vectors $b, Ab, \ldots, A^{r-1} b$ are linearly independent until $r = r_0$, and $K_r = K_{r_0}$ for all $r \ge r_0$; thus $r_0$ denotes the maximal dimension of a Krylov subspace. The maximal dimension satisfies $r_0 \le 1 + \operatorname{rank} A$ and $r_0 \le n$. More exactly, $r_0 \le \deg p(x)$, where $p(x)$ is the minimal polynomial of $A$.
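A quick, hedged illustration of this definition in NumPy (the names and sizes below are ours): stack $b, Ab, \ldots, A^{r-1} b$ as columns and read the dimension of $K_r(A, b)$ off the rank, which stops growing once $r$ reaches $r_0$.

```python
import numpy as np

# Build the Krylov matrix [b, Ab, ..., A^{r-1} b] and check its rank,
# which equals dim K_r(A, b) and saturates at the maximal dimension r_0.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

for r in range(1, n + 2):
    K = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(r)])
    print(r, np.linalg.matrix_rank(K))  # rank stops growing at r_0 <= n
```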
Numerical linear algebra
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of.
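As a toy Python illustration of that representation error (our example, not from the source):

```python
# Binary floating point cannot store 0.1 or 0.2 exactly, so the computed
# sum differs from 0.3 by a small rounding error, of the kind that
# numerical algorithms must control as it propagates through matrix operations.
print(0.1 + 0.2 == 0.3)        # False
print(abs((0.1 + 0.2) - 0.3))  # about 5.6e-17
```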
Arnoldi iteration
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices. The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called direct methods, which must run to completion to give any useful results (see, for example, the Householder transformation).
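A hedged sketch of Arnoldi used as an eigenvalue method (the helper name arnoldi and the sizes are illustrative assumptions): after $n$ steps, the eigenvalues of the leading $n$-by-$n$ block of the Hessenberg matrix, the Ritz values, tend to approximate the extremal eigenvalues of $A$.

```python
import numpy as np

def arnoldi(A, v0, n):
    """n Arnoldi steps: returns an orthonormal Krylov basis Q and the
    (n+1)-by-n upper Hessenberg H with A @ Q[:, :n] ~= Q @ H.
    Assumes no breakdown (H[k+1, k] stays nonzero)."""
    m = A.shape[0]
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for k in range(n):
        v = A @ Q[:, k]
        for j in range(k + 1):       # modified Gram-Schmidt orthogonalization
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        Q[:, k + 1] = v / H[k + 1, k]
    return Q, H

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
Q, H = arnoldi(A, rng.standard_normal(200), 40)
ritz = np.linalg.eigvals(H[:40, :40])  # Ritz values: approximate eigenvalues of A
```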