Diagonalizable matrix: In linear algebra, a square matrix $A$ is called diagonalizable or non-defective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix $P$ and a diagonal matrix $D$ such that $P^{-1}AP = D$, or equivalently $A = PDP^{-1}$. (Such $P$, $D$ are not unique.) For a finite-dimensional vector space $V$, a linear map $T : V \to V$ is called diagonalizable if there exists an ordered basis of $V$ consisting of eigenvectors of $T$.
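A minimal sketch of this factorization, using an assumed example matrix with distinct eigenvalues: NumPy's eigendecomposition supplies $P$ (eigenvectors) and $D$ (eigenvalues), and we verify $A = PDP^{-1}$.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # hypothetical matrix with distinct eigenvalues

eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors of A
D = np.diag(eigvals)                # diagonal matrix of the eigenvalues

# Reconstruct A from the factorization A = P D P^{-1}
A_reconstructed = P @ D @ np.linalg.inv(P)
print(np.allclose(A, A_reconstructed))   # True: A is diagonalizable
```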
Hankel matrix: In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant, e.g.: $\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \end{bmatrix}$. More generally, a Hankel matrix is any $n \times n$ matrix $A$ of the form $A = \begin{bmatrix} a_0 & a_1 & a_2 & \cdots & a_{n-1} \\ a_1 & a_2 & a_3 & \cdots & a_n \\ a_2 & a_3 & a_4 & \cdots & a_{n+1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n-1} & a_n & a_{n+1} & \cdots & a_{2n-2} \end{bmatrix}$. In terms of the components, if the $(i, j)$ element of $A$ is denoted $A_{ij}$, and assuming $i \le j$, then we have $A_{i,j} = A_{i+k,j-k}$ for all $k = 0, 1, \ldots, j - i$. Any Hankel matrix is symmetric. Let $J_n$ be the $n \times n$ exchange matrix. If $H$ is an $n \times n$ Hankel matrix, then $H = T J_n$ where $T$ is an $n \times n$ Toeplitz matrix. If $T$ is real symmetric, then $H = T J_n$ will have the same eigenvalues as $T$ up to sign.
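A short sketch, using assumed entries, of building a Hankel matrix with SciPy from its first column and last row and checking the symmetry noted above.

```python
import numpy as np
from scipy.linalg import hankel

# Hankel matrix from first column and last row: constant along ascending skew-diagonals.
H = hankel([1, 2, 3, 4], [4, 5, 6, 7])
print(H)
# H[i, j] depends only on i + j, so H equals its transpose:
print(np.array_equal(H, H.T))   # True: every Hankel matrix is symmetric
```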
Linear equation: In mathematics, a linear equation is an equation that may be put in the form $a_1 x_1 + \cdots + a_n x_n + b = 0$, where $x_1, \ldots, x_n$ are the variables (or unknowns), and $b, a_1, \ldots, a_n$ are the coefficients, which are often real numbers. The coefficients may be considered as parameters of the equation, and may be arbitrary expressions, provided they do not contain any of the variables. To yield a meaningful equation, the coefficients $a_1, \ldots, a_n$ are required to not all be zero. Alternatively, a linear equation can be obtained by equating to zero a linear polynomial over some field, from which the coefficients are taken.
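A small sketch with hypothetical coefficients: a two-variable linear equation $a_1 x_1 + a_2 x_2 + b = 0$, solved for $x_2$ (possible here because $a_2 \neq 0$) and then checked.

```python
a1, a2, b = 2.0, -3.0, 4.0        # assumed coefficients, not all zero

def satisfies(x1, x2, tol=1e-12):
    """Return True if (x1, x2) solves a1*x1 + a2*x2 + b = 0."""
    return abs(a1 * x1 + a2 * x2 + b) < tol

# Solve for x2 in terms of x1: x2 = -(a1*x1 + b) / a2
x1 = 1.0
x2 = -(a1 * x1 + b) / a2
print(satisfies(x1, x2))          # True
```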
Symmetric matrix: In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, $A$ is symmetric if $A = A^{\mathsf T}$. Because equal matrices have equal dimensions, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if $a_{ij}$ denotes the entry in the $i$th row and $j$th column, then $a_{ji} = a_{ij}$ for all indices $i$ and $j$. Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.
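A brief sketch with an assumed example matrix: checking $A = A^{\mathsf T}$ directly, and symmetrizing an arbitrary square matrix via $(B + B^{\mathsf T})/2$.

```python
import numpy as np

M = np.array([[1.0, 7.0, 3.0],
              [7.0, 4.0, 5.0],
              [3.0, 5.0, 6.0]])    # entries mirror across the main diagonal

print(np.array_equal(M, M.T))      # True: M equals its transpose

# Any square matrix B yields a symmetric matrix via (B + B.T) / 2.
B = np.random.default_rng(0).random((3, 3))
S = (B + B.T) / 2
print(np.allclose(S, S.T))         # True
```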
System of linear equations: In mathematics, a system of linear equations (or linear system) is a collection of one or more linear equations involving the same variables. For example, $\begin{cases} 3x + 2y - z = 1 \\ 2x - 2y + 4z = -2 \\ -x + \tfrac{1}{2}y - z = 0 \end{cases}$ is a system of three equations in the three variables $x$, $y$, $z$. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by the ordered triple $(x, y, z) = (1, -2, -2)$, since it makes all three equations valid. The word "system" indicates that the equations should be considered collectively, rather than individually.
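A minimal sketch solving the three-equation system quoted above in matrix form $A\mathbf{x} = \mathbf{b}$ with NumPy and verifying that all equations hold simultaneously.

```python
import numpy as np

#   3x + 2y -  z =  1
#   2x - 2y + 4z = -2
#   -x + y/2 - z =  0
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

solution = np.linalg.solve(A, b)
print(solution)                       # approximately [ 1. -2. -2.]
print(np.allclose(A @ solution, b))   # True: all three equations are satisfied
```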
Dimension (vector space): In mathematics, the dimension of a vector space $V$ is the cardinality (i.e., the number of vectors) of a basis of $V$ over its base field. It is sometimes called Hamel dimension (after Georg Hamel) or algebraic dimension to distinguish it from other types of dimension. For every vector space there exists a basis, and all bases of a vector space have equal cardinality; as a result, the dimension of a vector space is uniquely defined. We say $V$ is finite-dimensional if the dimension of $V$ is finite, and infinite-dimensional if its dimension is infinite.
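A short illustrative sketch with assumed vectors: for a finite-dimensional subspace spanned by given vectors, the dimension equals the rank of the matrix whose columns are those vectors.

```python
import numpy as np

# Columns: v1, v2, v3 in R^3, with v3 = v1 + v2 (so the span is 2-dimensional).
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

print(np.linalg.matrix_rank(V))   # 2: any basis of span{v1, v2, v3} has two vectors
```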
Hermitian matrix: In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose; that is, the element in the $i$-th row and $j$-th column is equal to the complex conjugate of the element in the $j$-th row and $i$-th column, for all indices $i$ and $j$: $a_{ij} = \overline{a_{ji}}$, or in matrix form: $A = \overline{A^{\mathsf T}}$. Hermitian matrices can be understood as the complex extension of real symmetric matrices.
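A minimal sketch with an assumed complex matrix: checking $A = \overline{A^{\mathsf T}}$ and noting that Hermitian matrices have real eigenvalues.

```python
import numpy as np

H = np.array([[2.0 + 0j, 1.0 - 1j],
              [1.0 + 1j, 3.0 + 0j]])   # off-diagonal entries are complex conjugates

print(np.allclose(H, H.conj().T))      # True: H equals its conjugate transpose
print(np.linalg.eigvalsh(H))           # eigenvalues of a Hermitian matrix are real
```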
LU decomposition: In numerical analysis and linear algebra, lower–upper (LU) decomposition or factorization factors a matrix as the product of a lower triangular matrix and an upper triangular matrix (see matrix decomposition). The product sometimes includes a permutation matrix as well. LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using LU decomposition, and it is also a key step when inverting a matrix or computing the determinant of a matrix.
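A brief sketch, using an assumed example matrix, of an LU factorization with partial pivoting via SciPy, which returns a permutation matrix alongside the triangular factors.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

P, L, U = lu(A)                   # P permutation, L lower triangular, U upper triangular
print(np.allclose(A, P @ L @ U))  # True: A = P L U
```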
Dimensionality reduction: Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension. Working in high-dimensional spaces can be undesirable for many reasons; raw data are often sparse as a consequence of the curse of dimensionality, and analyzing the data is usually computationally intractable (hard to control or deal with).
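A sketch of one common dimensionality-reduction technique (principal component analysis via the singular value decomposition), projecting assumed synthetic 5-dimensional data onto its two leading principal components.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))              # hypothetical data: 100 samples, 5 features

Xc = X - X.mean(axis=0)                    # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:2].T                  # coordinates in the 2-D principal subspace

print(X_reduced.shape)                     # (100, 2)
```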
Cholesky decomposition: In linear algebra, the Cholesky decomposition or Cholesky factorization (pronounced /ʃəˈlɛski/) is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
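A minimal sketch, using an assumed symmetric positive-definite matrix, of computing the factor $L$ with $A = LL^{\mathsf T}$ and then solving $A\mathbf{x} = \mathbf{b}$ with two triangular solves.

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 3.0, 1.0],
              [2.0, 1.0, 3.0]])            # symmetric positive-definite (assumed example)
b = np.array([1.0, 2.0, 3.0])

L = np.linalg.cholesky(A)                  # lower triangular factor
print(np.allclose(A, L @ L.T))             # True: A = L L^T

# Solve A x = b via L y = b, then L^T x = y.
y = solve_triangular(L, b, lower=True)
x = solve_triangular(L.T, y, lower=False)
print(np.allclose(A @ x, b))               # True
```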