Matrix (mathematics)
In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example,

\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}

is a matrix with two rows and three columns. This is often referred to as a "two by three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3. Without further specifications, matrices represent linear maps and allow explicit computations in linear algebra.
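As a concrete illustration (a minimal sketch in Python with NumPy; the library choice is this example's assumption, not the article's), the 2 × 3 matrix above can be constructed and its dimensions inspected:

```python
import numpy as np

# A 2 x 3 matrix: two rows, three columns.
A = np.array([[1, 9, -13],
              [20, 5, -6]])

print(A.shape)  # (2, 3) -- (rows, columns)
```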
Spectral radius
In mathematics, the spectral radius of a square matrix is the maximum of the absolute values of its eigenvalues. More generally, the spectral radius of a bounded linear operator is the supremum of the absolute values of the elements of its spectrum. The spectral radius is often denoted by ρ(·). Let λ1, ..., λn be the eigenvalues of a matrix A ∈ ℂ^(n×n). The spectral radius of A is defined as

ρ(A) = max{ |λ1|, ..., |λn| }.

The spectral radius can be thought of as an infimum over all submultiplicative matrix norms: ρ(A) ≤ ‖A‖ for every such norm, and for any ε > 0 some norm satisfies ‖A‖ ≤ ρ(A) + ε.
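A minimal sketch of the definition, assuming NumPy's eigenvalue routine (the example matrix is arbitrary):

```python
import numpy as np

def spectral_radius(A):
    # Maximum absolute value of the eigenvalues of a square matrix.
    return max(abs(np.linalg.eigvals(A)))

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # eigenvalues: -1 and -2
print(spectral_radius(A))             # 2.0
```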
Singular value
In mathematics, in particular functional analysis, the singular values, or s-numbers, of a compact operator T : X → Y acting between Hilbert spaces X and Y are the square roots of the (necessarily non-negative) eigenvalues of the self-adjoint operator T*T (where T* denotes the adjoint of T). The singular values are non-negative real numbers, usually listed in decreasing order (σ1(T), σ2(T), ...). The largest singular value σ1(T) is equal to the operator norm of T (see Min-max theorem).
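The following sketch, assuming NumPy (a finite-dimensional matrix stands in for a general compact operator), checks that the square roots of the eigenvalues of A*A match the singular values returned directly:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Singular values as square roots of the eigenvalues of A* A ...
eigvals = np.linalg.eigvalsh(A.conj().T @ A)   # real, ascending order
from_eigs = np.sqrt(eigvals)[::-1]             # descending order

# ... agree with the values returned by the SVD routine.
from_svd = np.linalg.svd(A, compute_uv=False)
print(np.allclose(from_eigs, from_svd))        # True
```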
Singular value decomposition
In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any m × n matrix. It is related to the polar decomposition. Specifically, the singular value decomposition of an m × n complex matrix M is a factorization of the form M = UΣV*, where U is an m × m complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, V is an n × n complex unitary matrix, and V* is the conjugate transpose of V.
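A minimal NumPy sketch of the factorization for a real 2 × 3 matrix (the example matrix is arbitrary):

```python
import numpy as np

M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])        # a 2 x 3 real matrix

U, s, Vh = np.linalg.svd(M)            # Vh is V* (conjugate transpose of V)
Sigma = np.zeros(M.shape)
np.fill_diagonal(Sigma, s)             # rectangular diagonal matrix

print(np.allclose(M, U @ Sigma @ Vh))  # True: M = U Sigma V*
```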
Rotation matrix
In linear algebra, a rotation matrix is a transformation matrix that is used to perform a rotation in Euclidean space. For example, using the convention below, the matrix

R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}

rotates points in the xy plane counterclockwise through an angle θ about the origin of a two-dimensional Cartesian coordinate system. To perform the rotation on a plane point with standard coordinates v = (x, y), it should be written as a column vector and multiplied by the matrix R:

Rv = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{bmatrix}.

If x and y are the endpoint coordinates of a unit vector, so that x = cos φ and y = sin φ for some angle φ, then the above equations become the trigonometric angle-sum formulae.
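A small sketch, assuming NumPy, applying the rotation above to a point on the x-axis:

```python
import numpy as np

def rotation_matrix(theta):
    # 2-D counterclockwise rotation through angle theta (radians).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

v = np.array([1.0, 0.0])               # point on the x-axis
print(rotation_matrix(np.pi / 2) @ v)  # ~[0, 1]: rotated onto the y-axis
```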
Square root of a matrix
In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices. A matrix B is said to be a square root of A if the matrix product BB is equal to A. Some authors use the name square root or the notation A^{1/2} only for the specific case when A is positive semidefinite, to denote the unique matrix B that is positive semidefinite and such that BB = BᵀB = A (for real-valued matrices, where Bᵀ is the transpose of B).
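For the positive-semidefinite case, one standard route (sketched here with NumPy's symmetric eigendecomposition; the helper name `psd_sqrt` is ours) is to take square roots of the eigenvalues:

```python
import numpy as np

def psd_sqrt(A):
    # Unique positive-semidefinite square root of a symmetric PSD matrix:
    # diagonalize A = V diag(w) V^T and take square roots of the eigenvalues.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = psd_sqrt(A)
print(np.allclose(B @ B, A))  # True
```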
Numerical linear algebra
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational numbers (or even most rational ones), so when a computer algorithm is applied to a matrix of data, it can sometimes amplify the difference between a number stored in the computer and the true number it approximates.
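A small illustration, assuming NumPy (the Hilbert-matrix example is ours, chosen because it is famously ill-conditioned):

```python
import numpy as np

# 0.1 has no exact binary floating-point representation, so rounding
# error appears before any linear algebra is done ...
print(0.1 + 0.2 == 0.3)            # False

# ... and an ill-conditioned matrix (a 10 x 10 Hilbert matrix) can
# amplify that error when solving A x = b.
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
x = np.linalg.solve(A, A @ x_true)
print(np.max(np.abs(x - x_true)))  # noticeably larger than machine epsilon
```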
Polar decomposition
In mathematics, the polar decomposition of a square real or complex matrix A is a factorization of the form A = UP, where U is a unitary matrix and P is a positive semi-definite Hermitian matrix (U is an orthogonal matrix and P is a positive semi-definite symmetric matrix in the real case), both square and of the same size. Intuitively, if a real n × n matrix A is interpreted as a linear transformation of n-dimensional space ℝⁿ, the polar decomposition separates it into a rotation or reflection U of ℝⁿ, and a scaling of the space along a set of n orthogonal axes.
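One common way to compute it, sketched here via the SVD with NumPy (the helper name `polar` is ours):

```python
import numpy as np

def polar(A):
    # Polar decomposition A = U P from the SVD A = W S V*:
    # U = W V* is orthogonal/unitary, P = V S V* is symmetric PSD.
    W, s, Vh = np.linalg.svd(A)
    U = W @ Vh
    P = Vh.conj().T @ np.diag(s) @ Vh
    return U, P

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
U, P = polar(A)
print(np.allclose(U @ P, A))             # True: A = U P
print(np.allclose(U.T @ U, np.eye(2)))   # True: U is orthogonal
```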
Cholesky decomposition
In linear algebra, the Cholesky decomposition or Cholesky factorization (pronounced /ʃəˈlɛski/) is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
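A minimal NumPy sketch (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])      # symmetric positive-definite

L = np.linalg.cholesky(A)      # lower triangular factor
print(np.allclose(L @ L.T, A)) # True: A = L L^T (L L* in the complex case)
```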
Operator norm
In mathematics, the operator norm measures the "size" of certain linear operators by assigning each a real number called its operator norm. Formally, it is a norm defined on the space of bounded linear operators between two given normed vector spaces. Informally, the operator norm of a linear map is the maximum factor by which it "lengthens" vectors. Given two normed vector spaces V and W (over the same base field, either the real numbers ℝ or the complex numbers ℂ), a linear map A : V → W is continuous if and only if there exists a real number c such that

‖Ax‖ ≤ c‖x‖ for all x in V.

The norm on the left is the one in W and the norm on the right is the one in V.
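For the Euclidean norm on ℝⁿ, the operator norm of a matrix is its largest singular value, which this NumPy sketch confirms numerically (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# For the Euclidean vector norm, the operator norm of a matrix is its
# largest singular value; NumPy exposes it as the matrix 2-norm.
op_norm = np.linalg.norm(A, ord=2)
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
print(np.isclose(op_norm, sigma_max))  # True
```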