Matrix decomposition
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems. In numerical analysis, different decompositions are used to implement efficient matrix algorithms. For instance, when solving a system of linear equations Ax = b, the matrix A can be decomposed via the LU decomposition.
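A minimal sketch of this use, assuming NumPy and SciPy are available (the matrix and right-hand side are arbitrary illustrative values):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Example system Ax = b with arbitrary values.
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

# Factor A once (PA = LU); the factors can be reused for many right-hand sides.
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)

print(x)                      # solution of Ax = b
print(np.allclose(A @ x, b))  # True: the solution satisfies the system
```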
Jordan normal form
In linear algebra, a Jordan normal form, also known as a Jordan canonical form (JCF), is an upper triangular matrix of a particular form called a Jordan matrix representing a linear operator on a finite-dimensional vector space with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal (on the superdiagonal), and with identical diagonal entries to the left and below them. Let V be a vector space over a field K.
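As a sketch of computing a Jordan normal form symbolically, assuming SymPy is available (the matrix is an arbitrary non-diagonalizable example):

```python
from sympy import Matrix

# A 2x2 matrix with repeated eigenvalue 2 but only one independent
# eigenvector, so it is not diagonalizable.
A = Matrix([[1, 1],
            [-1, 3]])

# jordan_form returns P and J with A = P * J * P^{-1};
# J is the Jordan normal form of A.
P, J = A.jordan_form()
print(J)                     # Matrix([[2, 1], [0, 2]]): one 2x2 Jordan block
print(P * J * P.inv() == A)  # True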
Pierre-Simon Laplace
Pierre-Simon, Marquis de Laplace (/ləˈplɑːs/; French: [pjɛʁ simɔ̃ laplas]; 23 March 1749 – 5 March 1827) was a French scholar and polymath whose work was important to the development of engineering, mathematics, statistics, physics, astronomy, and philosophy. He summarized and extended the work of his predecessors in his five-volume Mécanique céleste (Celestial Mechanics) (1799–1825). This work translated the geometric study of classical mechanics to one based on calculus, opening up a broader range of problems.
Fixed point (mathematics)
Fixed points in mathematics are not to be confused with other uses of "fixed point", or with stationary points, where f′(x) = 0. In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically for functions, a fixed point is an element that is mapped to itself by the function. Formally, c is a fixed point of a function f if c belongs to both the domain and the codomain of f, and f(c) = c.
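A short sketch of finding a fixed point by simple iteration (plain Python; the starting point and tolerance are arbitrary choices):

```python
import math

# Fixed-point iteration: repeatedly apply f until the value stops changing.
# For f = cos, the iteration converges to the unique fixed point c with
# cos(c) = c (the Dottie number, ~0.739085).
def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

c = fixed_point(math.cos, 1.0)
print(c, math.cos(c))  # both ~0.739085, since f(c) = c
```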
Householder transformation
In linear algebra, a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin. The Householder transformation was used in a 1958 paper by Alston Scott Householder. Its analogue over general inner product spaces is the Householder operator. The reflection hyperplane can be defined by its normal vector, a unit vector (a vector with length 1) that is orthogonal to the hyperplane.
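A minimal sketch, assuming NumPy (the vector x is arbitrary; v is chosen in the standard way so the reflection sends x onto the first coordinate axis):

```python
import numpy as np

# Householder reflector H = I - 2 v v^T for a unit normal vector v.
x = np.array([3.0, 1.0, 2.0])
e1 = np.eye(len(x))[0]

# Choose v so that H x = alpha * e1 (all entries below the first become zero).
alpha = -np.sign(x[0]) * np.linalg.norm(x)
u = x - alpha * e1
v = u / np.linalg.norm(u)

H = np.eye(len(x)) - 2.0 * np.outer(v, v)
print(H @ x)                               # ~ [alpha, 0, 0]
print(np.allclose(H @ H, np.eye(len(x))))  # True: a reflection is an involution
```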
Hessenberg matrix
In linear algebra, a Hessenberg matrix is a special kind of square matrix, one that is "almost" triangular. To be exact, an upper Hessenberg matrix has zero entries below the first subdiagonal, and a lower Hessenberg matrix has zero entries above the first superdiagonal. They are named after Karl Hessenberg. A Hessenberg decomposition is a matrix decomposition of a matrix A into a unitary matrix P and a Hessenberg matrix H such that PHP* = A, where P* denotes the conjugate transpose.
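A brief sketch of a Hessenberg decomposition, assuming SciPy (the input matrix is random):

```python
import numpy as np
from scipy.linalg import hessenberg

# Reduce A to upper Hessenberg form, A = Q H Q*, with H zero below the
# first subdiagonal. This is a standard first step of the QR eigenvalue
# algorithm, since it preserves eigenvalues while making later work cheaper.
A = np.random.default_rng(0).standard_normal((5, 5))
H, Q = hessenberg(A, calc_q=True)

print(np.allclose(np.tril(H, -2), 0))      # True: zeros below the subdiagonal
print(np.allclose(Q @ H @ Q.conj().T, A))  # True: the decomposition holds
```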
Tridiagonal matrix
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below this), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:

⎡1 4 0 0⎤
⎢3 4 1 0⎥
⎢0 2 3 4⎥
⎣0 0 1 3⎦

The determinant of a tridiagonal matrix is given by the continuant of its elements. An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.
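The continuant can be evaluated with a three-term recurrence; a sketch assuming NumPy, applied to the example matrix above:

```python
import numpy as np

# Determinant of a tridiagonal matrix via the continuant recurrence:
#   f_n = a_n * f_{n-1} - b_{n-1} * c_{n-1} * f_{n-2},
# where a is the main diagonal, b the superdiagonal, c the subdiagonal.
def tridiag_det(a, b, c):
    f_prev, f_curr = 1.0, a[0]
    for i in range(1, len(a)):
        f_prev, f_curr = f_curr, a[i] * f_curr - b[i - 1] * c[i - 1] * f_prev
    return f_curr

a = [1.0, 4.0, 3.0, 3.0]   # main diagonal of the example above
b = [4.0, 1.0, 4.0]        # superdiagonal
c = [3.0, 2.0, 1.0]        # subdiagonal

A = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
print(tridiag_det(a, b, c))   # -46.0
print(np.linalg.det(A))       # ~ -46.0, agreeing with the recurrence
```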
Inverse iteration
In numerical analysis, inverse iteration (also known as the inverse power method) is an iterative eigenvalue algorithm. It allows one to find an approximate eigenvector when an approximation to a corresponding eigenvalue is already known. The method is conceptually similar to the power method. It appears to have originally been developed to compute resonance frequencies in the field of structural mechanics.
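A compact sketch of the method, assuming NumPy (the shift mu and the test matrix are arbitrary illustrative choices):

```python
import numpy as np

# Inverse iteration: given an approximate eigenvalue mu, repeatedly solve
# (A - mu*I) y = x and normalize. The iterates converge to the eigenvector
# whose eigenvalue lies closest to mu.
def inverse_iteration(A, mu, num_iters=50):
    n = A.shape[0]
    x = np.random.default_rng(0).standard_normal(n)
    M = A - mu * np.eye(n)
    for _ in range(num_iters):
        y = np.linalg.solve(M, x)
        x = y / np.linalg.norm(y)
    return x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 1 and 3
v = inverse_iteration(A, mu=2.9)    # rough guess near the eigenvalue 3
print(v)                            # ~ +-[0.707, 0.707]
print((A @ v) / v)                  # each ratio ~ 3.0, the nearby eigenvalue
```

In practice one would factor A − μI once (for example with an LU decomposition) and reuse the factors in each iteration rather than solving from scratch.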
Companion matrix
In linear algebra, the Frobenius companion matrix of the monic polynomial p(x) = c_0 + c_1 x + ⋯ + c_{n−1} x^{n−1} + x^n is the square matrix

C(p) =
⎡0 0 ⋯ 0 −c_0⎤
⎢1 0 ⋯ 0 −c_1⎥
⎢0 1 ⋯ 0 −c_2⎥
⎢⋮ ⋮ ⋱ ⋮   ⋮ ⎥
⎣0 0 ⋯ 1 −c_{n−1}⎦

Some authors use the transpose of this matrix, C(p)^T, which is more convenient for some purposes such as linear recurrence relations (see below). C(p) is defined from the coefficients of p(x), while the characteristic polynomial as well as the minimal polynomial of C(p) are equal to p(x). In this sense, the matrix C(p) and the polynomial p(x) are "companions". Any matrix A with entries in a field F has characteristic polynomial p(x) = det(xI − A), which in turn has companion matrix C(p).
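A sketch of the "companion" relationship in code, assuming NumPy (the polynomial p(x) = x^3 − 6x^2 + 11x − 6 = (x − 1)(x − 2)(x − 3) is chosen so its roots are obvious):

```python
import numpy as np

# Companion matrix of the monic polynomial p(x) = c0 + c1*x + c2*x^2 + x^3,
# following the column convention above: ones on the subdiagonal and the
# negated coefficients -c0, -c1, -c2 in the last column.
c = np.array([-6.0, 11.0, -6.0])   # c0, c1, c2
n = len(c)
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)         # subdiagonal of ones
C[:, -1] = -c                      # last column

# The eigenvalues of C(p) are exactly the roots of p.
print(np.sort(np.linalg.eigvals(C)))   # ~ [1, 2, 3]
```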
Invariant subspace
In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T; that is, T(W) ⊆ W. Consider a linear mapping T. An invariant subspace W of T has the property that all vectors v ∈ W are transformed by T into vectors also contained in W. This can be stated as: v ∈ W ⟹ T(v) ∈ W. Trivial examples of invariant subspaces are V itself, since T maps every vector in V into V, and {0}, since a linear map has to map 0 to 0. A basis of a 1-dimensional invariant subspace is simply a non-zero vector v. Consequently, any vector x in such a subspace can be represented as x = αv, where α is a scalar.
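A tiny numerical illustration of a 1-dimensional invariant subspace, assuming NumPy (T and v are arbitrary, with v an eigenvector of T):

```python
import numpy as np

# The span of an eigenvector v is invariant: T maps alpha*v to
# alpha*lambda*v, which is still a multiple of v.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])   # eigenvector of T with eigenvalue 2

w = T @ v
print(w)                                # [2, 0] = 2*v, still in span(v)
print(np.allclose(np.cross(v, w), 0))   # True: w is parallel to v
```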