Diophantine approximation
In number theory, the study of Diophantine approximation deals with the approximation of real numbers by rational numbers. It is named after Diophantus of Alexandria. The first problem was to know how well a real number can be approximated by rational numbers. For this problem, a rational number a/b is a "good" approximation of a real number α if the absolute value of the difference between a/b and α does not decrease when a/b is replaced by another rational number with a smaller denominator.
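As a concrete illustration of this criterion, here is a minimal brute-force check in Python; the helper name and the choice of 22/7 and 314/100 as approximations of π are illustrative assumptions, not taken from the text above.

    from fractions import Fraction
    from math import pi

    def is_good_approximation(p, q, alpha):
        # p/q is "good" if no fraction with a smaller denominator comes strictly closer to alpha.
        err = abs(Fraction(p, q) - alpha)
        for b in range(1, q):
            a = round(alpha * b)          # best numerator for denominator b
            if abs(Fraction(a, b) - alpha) < err:
                return False
        return True

    print(is_good_approximation(22, 7, pi))     # True: classic good approximation of pi
    print(is_good_approximation(314, 100, pi))  # False: 22/7 is closer despite its smaller denominator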
Logarithm of a matrix
In mathematics, a logarithm of a matrix is another matrix such that the matrix exponential of the latter matrix equals the original matrix. It is thus a generalization of the scalar logarithm and in some sense an inverse function of the matrix exponential. Not all matrices have a logarithm, and those that do may have more than one. The study of logarithms of matrices leads to Lie theory, since when a matrix has a logarithm then it is an element of a Lie group and the logarithm is the corresponding element of the vector space of the Lie algebra.
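A minimal numerical sketch of this inverse relationship, assuming NumPy and SciPy are available (scipy.linalg.logm computes a matrix logarithm, scipy.linalg.expm the matrix exponential; the sample matrix is arbitrary):

    import numpy as np
    from scipy.linalg import expm, logm

    # A rotation by 90 degrees in the plane; it has a real logarithm.
    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    L = logm(A)                      # one particular logarithm of A
    print(np.allclose(expm(L), A))   # True: exponentiating the logarithm recovers A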
Divide-and-conquer eigenvalue algorithm
Divide-and-conquer eigenvalue algorithms are a class of eigenvalue algorithms for Hermitian or real symmetric matrices that have recently (circa 1990s) become competitive in terms of stability and efficiency with more traditional algorithms such as the QR algorithm. The basic concept behind these algorithms is the divide-and-conquer approach from computer science. An eigenvalue problem is divided into two problems of roughly half the size, each of which is solved recursively, and the eigenvalues of the original problem are computed from the results of these smaller problems.
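The divide step can be sketched in Python for a small real symmetric tridiagonal matrix; the matrix entries and split point below are arbitrary, and the conquer step (solving a secular equation for the rank-one update) is omitted:

    import numpy as np

    # A small symmetric tridiagonal matrix T.
    d = np.array([4.0, 3.0, 2.0, 5.0])     # diagonal
    e = np.array([1.0, 0.5, 0.25])         # off-diagonal
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

    # Divide: remove the coupling beta between rows m-1 and m, leaving
    # two smaller tridiagonal blocks plus a rank-one correction.
    m = 2
    beta = e[m - 1]
    T1 = T[:m, :m].copy(); T1[-1, -1] -= beta
    T2 = T[m:, m:].copy(); T2[0, 0]   -= beta
    v = np.zeros(4); v[m - 1] = 1.0; v[m] = 1.0

    # T equals the block diagonal of the two subproblems plus beta * v v^T.
    block = np.block([[T1, np.zeros((m, 4 - m))],
                      [np.zeros((4 - m, m)), T2]])
    print(np.allclose(T, block + beta * np.outer(v, v)))   # True

    # Each block's eigenproblem would be solved recursively; the eigenvalues of T
    # are then recovered from the rank-one update via the secular equation.
    print(np.linalg.eigvalsh(T1), np.linalg.eigvalsh(T2))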
Frobenius inner product
In mathematics, the Frobenius inner product is a binary operation that takes two matrices and returns a scalar. It is often denoted \langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F}. The operation is a component-wise inner product of two matrices as though they are vectors, and satisfies the axioms for an inner product. The two matrices must have the same dimension (the same number of rows and columns), but they are not restricted to be square matrices. Given two complex-valued n×m matrices A and B, the Frobenius inner product is defined as
\langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F} = \sum_{i,j} \overline{A_{ij}} \, B_{ij} = \operatorname{tr}\left( \overline{\mathbf{A}^T} \mathbf{B} \right),
where the overline denotes the complex conjugate and \overline{\mathbf{A}^T} denotes the Hermitian conjugate (conjugate transpose) of A.
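A quick numerical check that the component-wise sum and the trace form of the definition agree, using NumPy and arbitrary sample matrices:

    import numpy as np

    # Two complex-valued 2x3 matrices (illustrative values).
    A = np.array([[1 + 2j, 0, 3],
                  [4, 5 - 1j, 6]])
    B = np.array([[7, 8j, 9],
                  [1 - 1j, 2, 3]])

    componentwise = np.sum(np.conj(A) * B)      # sum of conj(A_ij) * B_ij
    trace_form = np.trace(A.conj().T @ B)       # tr(A^dagger B)

    print(np.isclose(componentwise, trace_form))  # True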
Sorting algorithm
In computer science, a sorting algorithm is an algorithm that puts elements of a list into an order. The most frequently used orders are numerical order and lexicographical order, and either ascending or descending. Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output.
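A small illustration of these orders using Python's built-in sorted function (the sample data is arbitrary):

    nums = [3, 1, 41, 5, 9, 2, 6]
    words = ["pear", "apple", "Banana", "cherry"]

    print(sorted(nums))                  # numerical order, ascending
    print(sorted(nums, reverse=True))    # numerical order, descending
    print(sorted(words))                 # lexicographical order (case-sensitive)
    print(sorted(words, key=str.lower))  # lexicographical order, ignoring case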
Jordan matrix
In the mathematical discipline of matrix theory, a Jordan matrix, named after Camille Jordan, is a block diagonal matrix over a ring R (whose identities are the zero 0 and one 1), where each block along the diagonal, called a Jordan block, has the following form:
\begin{pmatrix}
\lambda & 1       & 0       & \cdots  & 0 \\
0       & \lambda & 1       & \cdots  & 0 \\
\vdots  & \vdots  & \ddots  & \ddots  & \vdots \\
0       & 0       & \cdots  & \lambda & 1 \\
0       & 0       & \cdots  & 0       & \lambda
\end{pmatrix}
Every Jordan block is specified by its dimension n and its eigenvalue λ ∈ R, and is denoted as Jλ,n. It is an n×n matrix of zeroes everywhere except for the diagonal, which is filled with λ, and for the superdiagonal, which is composed of ones.
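A sketch constructing such a block with NumPy (the helper name jordan_block and the sample values are illustrative):

    import numpy as np

    def jordan_block(lam, n):
        """Return the n x n Jordan block: lam on the diagonal, ones on the superdiagonal."""
        return lam * np.eye(n) + np.eye(n, k=1)

    print(jordan_block(5.0, 4))
    # [[5. 1. 0. 0.]
    #  [0. 5. 1. 0.]
    #  [0. 0. 5. 1.]
    #  [0. 0. 0. 5.]]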
Polynomial
In mathematics, a polynomial is an expression consisting of indeterminates (also called variables) and coefficients that involves only the operations of addition, subtraction, multiplication, and positive-integer powers of variables. An example of a polynomial of a single indeterminate x is x² − 4x + 7. An example with three indeterminates is x³ + 2xyz² − yz + 1. Polynomials appear in many areas of mathematics and science.
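A sketch evaluating the single-indeterminate example above using only the allowed operations, here via Horner's rule (the evaluation point is arbitrary):

    def eval_poly(coeffs, x):
        """Evaluate a polynomial with coefficients [a_n, ..., a_1, a_0] via Horner's rule."""
        result = 0
        for c in coeffs:
            result = result * x + c   # only multiplication and addition are needed
        return result

    # x^2 - 4x + 7 at x = 3 is 9 - 12 + 7 = 4.
    print(eval_poly([1, -4, 7], 3))   # 4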
Square matrix
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if R is a square matrix representing a rotation (a rotation matrix) and v is a column vector describing the position of a point in space, the product Rv yields another column vector describing the position of that point after that rotation.
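A minimal NumPy sketch of this example, using a 2×2 rotation by 90 degrees (the angle and the point are arbitrary):

    import numpy as np

    theta = np.pi / 2                                    # rotate by 90 degrees
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])      # 2x2 rotation matrix
    v = np.array([[1.0],
                  [0.0]])                                # column vector: a point on the x-axis

    print(R @ v)   # approximately [[0.], [1.]]: the point after the rotation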
Invertible matrix
In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular, nondegenerate or, rarely, regular) if there exists an n-by-n square matrix B such that AB = BA = In, where In denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by A−1. Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.
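A quick NumPy check of this definition (the sample matrix is arbitrary; numpy.linalg.inv computes the inverse numerically):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [5.0, 3.0]])       # an invertible 2x2 matrix (determinant 1)

    B = np.linalg.inv(A)             # candidate inverse

    I = np.eye(2)
    print(np.allclose(A @ B, I) and np.allclose(B @ A, I))   # True: B is A^{-1}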
Nonnegative matrix
In mathematics, a nonnegative matrix, written X ≥ 0, is a matrix in which all the elements are equal to or greater than zero, that is, xij ≥ 0 for all i, j. A positive matrix is a matrix in which all the elements are strictly greater than zero. The set of positive matrices is a subset of all non-negative matrices. While such matrices are commonly found, the term is only occasionally used due to the possible confusion with positive-definite matrices, which are different. A matrix which is both non-negative and positive semidefinite is called a doubly non-negative matrix.
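A sketch of these three conditions checked with NumPy (the sample matrix is arbitrary, and the positive-semidefiniteness test via eigenvalues assumes a symmetric matrix):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    is_nonnegative = np.all(A >= 0)      # every entry >= 0
    is_positive    = np.all(A > 0)       # every entry strictly > 0
    # Doubly non-negative: entrywise non-negative and positive semidefinite,
    # checked here via the eigenvalues of the symmetric matrix A.
    is_psd = np.all(np.linalg.eigvalsh(A) >= 0)

    print(is_nonnegative, is_positive, bool(is_nonnegative and is_psd))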