Krylov subspace
In linear algebra, the order-$r$ Krylov subspace generated by an $n$-by-$n$ matrix $A$ and a vector $b$ of dimension $n$ is the linear subspace spanned by the images of $b$ under the first $r$ powers of $A$ (starting from $A^0 = I$), that is,

$\mathcal{K}_r(A, b) = \operatorname{span}\{\, b, Ab, A^2 b, \ldots, A^{r-1} b \,\}.$

The concept is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about it in 1931. The vectors $b, Ab, \ldots, A^{r-1}b$ are linearly independent as long as $r \leq r_0$, and $\mathcal{K}_r(A, b) = \mathcal{K}_{r_0}(A, b)$ for all $r \geq r_0$. Thus, $r_0$ denotes the maximal dimension of a Krylov subspace generated by $A$ and $b$. The maximal dimension satisfies $r_0 \leq 1 + \operatorname{rank} A$ and $r_0 \leq n$. More exactly, $r_0 \leq \deg p_{\min}$, where $p_{\min}$ is the minimal polynomial of $A$.
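To make the definition concrete, here is a short NumPy sketch (illustrative, not part of the original article; the test matrix, starting vector, and function name are arbitrary choices) that stacks the Krylov vectors into a matrix and shows the dimension saturating at $r_0$:

```python
import numpy as np

def krylov_basis(A, b, r):
    """Return the n-by-r matrix with columns b, Ab, A^2 b, ..., A^(r-1) b."""
    K = np.empty((b.shape[0], r))
    v = b.astype(float)
    for j in range(r):
        K[:, j] = v
        v = A @ v                  # next image of b under A
    return K

# The rank of the Krylov matrix stops growing once r reaches r_0 (= 3 here).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
b = np.array([1.0, 1.0, 1.0])
for r in range(1, 5):
    print(f"dim K_{r} =", np.linalg.matrix_rank(krylov_basis(A, b, r)))
```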
Generalized minimal residual method
In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector. The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986. It is a generalization and improvement of the MINRES method due to Paige and Saunders in 1975.
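In practice GMRES is usually invoked through a library. A minimal usage sketch with SciPy's gmres solver (the sparse, diagonally dominant test system below is an illustrative assumption, not from the source):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# A nonsymmetric, diagonally dominant tridiagonal system.
n = 200
A = diags([-1.2, 2.5, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(A, b)        # info == 0 signals convergence
print("converged:", info == 0)
print("residual norm:", np.linalg.norm(b - A @ x))
```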
Arnoldi iteration
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices. The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called direct methods, which must complete to give any useful result (see, for example, the Householder transformation).
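A compact sketch of the iteration, assuming NumPy (the function name, random test matrix, and subspace size are illustrative): $m$ steps of modified Gram-Schmidt orthogonalization build an orthonormal basis $Q$ of the Krylov subspace and an upper Hessenberg matrix $H$ whose eigenvalues (Ritz values) approximate those of $A$.

```python
import numpy as np

def arnoldi(A, b, m):
    """m Arnoldi steps: Q (n x (m+1)) has orthonormal columns spanning the
    Krylov subspace; H ((m+1) x m) is upper Hessenberg with A Q[:, :m] = Q H."""
    n = b.shape[0]
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # exact invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

# Eigenvalues of the small Hessenberg matrix (Ritz values) approximate
# extreme eigenvalues of the large matrix A.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500)) / np.sqrt(500)
Q, H = arnoldi(A, rng.standard_normal(500), 40)
ritz = np.linalg.eigvals(H[:-1, :])
print("largest Ritz value magnitude:", np.abs(ritz).max())
```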
Conjugate gradient method
In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.
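A minimal textbook-style implementation, assuming NumPy (the function name and the small SPD test system are illustrative, not from the source):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=None):
    """Solve A x = b for symmetric positive-definite A."""
    n = b.shape[0]
    maxiter = maxiter or 10 * n
    x = np.zeros(n)
    r = b.copy()                     # residual b - A x
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # A-conjugate update of the direction
        rs = rs_new
    return x

# Example on a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x, np.allclose(A @ x, b))
```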
Iterative method
In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones. A specific implementation with termination criteria for a given iterative method, such as gradient descent, hill climbing, Newton's method, or quasi-Newton methods like BFGS, is an algorithm of that iterative method.
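As a small illustration (an assumption for this text, not from the source), here is Newton's method in Python with an explicit termination criterion, the ingredient that turns the abstract method into an algorithm:

```python
def newton(f, fprime, x0, tol=1e-12, maxiter=50):
    """Newton's method: each approximation is derived from the previous one."""
    x = x0
    for _ in range(maxiter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:      # termination criterion
            break
    return x

# Approximate sqrt(2) as the positive root of x^2 - 2.
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))
```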
Numerical stability
In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms. The precise definition of stability depends on the context: one context is numerical linear algebra, and another is algorithms for solving ordinary and partial differential equations by discrete approximation. In numerical linear algebra, the principal concern is instabilities caused by proximity to singularities of various kinds, such as very small or nearly colliding eigenvalues.
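A classic small demonstration (chosen here for illustration, not taken from the source) is catastrophic cancellation, where two algebraically equivalent formulas behave very differently in double precision:

```python
import math

# Subtracting nearly equal numbers destroys significant digits.
x = 1e-8
naive = 1.0 - math.cos(x)              # cos(x) rounds to 1.0 in double precision
stable = 2.0 * math.sin(x / 2.0) ** 2  # algebraically equal, numerically stable
print(naive)   # 0.0  -- all significant digits lost
print(stable)  # ~5e-17, the correct value
```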
Lanczos algorithm
The Lanczos algorithm is an iterative method devised by Cornelius Lanczos that is an adaptation of power methods to find the $m$ "most useful" (tending towards extreme highest/lowest) eigenvalues and eigenvectors of an $n \times n$ Hermitian matrix, where $m$ is often but not necessarily much smaller than $n$. Although computationally efficient in principle, the method as initially formulated was not useful, due to its numerical instability. In 1970, Ojalvo and Newman showed how to make the method numerically stable and applied it to the solution of very large engineering structures subjected to dynamic loading.
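A sketch of the classical three-term recurrence in NumPy (function name and test matrix are illustrative). Note this is the method "as initially formulated": in floating point the computed basis gradually loses orthogonality, which is exactly the instability mentioned above; stabilized variants add reorthogonalization.

```python
import numpy as np

def lanczos(A, v, m):
    """m Lanczos steps for Hermitian A: orthonormal basis V of the Krylov
    subspace plus the tridiagonal entries alpha (diagonal), beta (off-diagonal)."""
    n = v.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]   # three-term recurrence
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, alpha, beta

# Extreme eigenvalues of the small tridiagonal matrix approximate those of A.
rng = np.random.default_rng(1)
M = rng.standard_normal((400, 400))
A = (M + M.T) / 2                            # symmetric test matrix
V, alpha, beta = lanczos(A, rng.standard_normal(400), 60)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
print("Lanczos max Ritz value:", np.linalg.eigvalsh(T).max())
print("true max eigenvalue:   ", np.linalg.eigvalsh(A).max())
```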
Numerical methods for linear least squares
Numerical methods for linear least squares entail the numerical analysis of linear least squares problems. A general approach to the least squares problem $\min_{\beta} \lVert y - X\beta \rVert^2$ can be described as follows. Suppose that we can find an $n \times m$ matrix $S$ such that $XS$ is an orthogonal projection onto the image of $X$. Then a solution to our minimization problem is given by $\beta = S y$, simply because $X\beta = X(Sy) = (XS)y$ is exactly the sought orthogonal projection of $y$ onto the image of $X$ (the image of $X$ being the subspace spanned by the column vectors of $X$).
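For a design matrix $X$ with full column rank, one concrete choice of $S$ (an assumption for illustration; the excerpt does not fix a particular $S$) is the Moore-Penrose pseudoinverse $S = (X^T X)^{-1} X^T$, for which $XS$ is exactly the orthogonal projector onto the image of $X$. A quick NumPy check:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 3))        # tall full-rank design matrix
y = rng.standard_normal(8)

S = np.linalg.pinv(X)                  # S = (X^T X)^{-1} X^T for full rank X
P = X @ S                              # orthogonal projector onto im(X)

beta = S @ y                           # least-squares solution
print(np.allclose(P, P.T), np.allclose(P @ P, P))           # projector checks
print(np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0]))
```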
Floating-point arithmetic
In computing, floating-point arithmetic (FP) is arithmetic that represents subsets of real numbers using an integer with a fixed precision, called the significand, scaled by an integer exponent of a fixed base. Numbers of this form are called floating-point numbers. For example, 12.345 is a floating-point number in base ten with five digits of precision: $12.345 = 12345 \times 10^{-3}$. However, unlike 12.345, 12.3456 is not a floating-point number in base ten with five digits of precision; it needs six digits of precision. The nearest floating-point number with only five digits is 12.346.
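A small illustration using Python's standard library (added here, not from the source): the Decimal type exposes the significand/exponent pair of a base-ten floating-point number directly, and float.hex shows the base-2 analogue used by hardware doubles.

```python
from decimal import Decimal

# 12.345 = 12345 x 10^-3: integer significand 12345, base 10, exponent -3.
sign, digits, exponent = Decimal("12.345").as_tuple()
print(digits, exponent)        # (1, 2, 3, 4, 5) -3

# Binary doubles use base 2: float.hex shows significand x 2^exponent.
print((0.1).hex())             # 0x1.999999999999ap-4 -- 0.1 is not exact in base 2
print(Decimal(0.1))            # the exact binary value actually stored for 0.1
```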
Quantitative easing
Quantitative easing (QE) is a monetary policy action where a central bank purchases predetermined amounts of government bonds or other financial assets in order to stimulate economic activity. Quantitative easing is a novel form of monetary policy that came into wide application after the financial crisis of 2007–2008. It is used to mitigate an economic recession when inflation is very low or negative, making standard monetary policy ineffective.